Generating Completions

Generate completions from prompts in three lines

# import our client
from llm_vm.client import Client

# Select which LLM you want to use; here we pick OpenAI's ChatGPT
client = Client(big_model='chat_gpt')

# Put in your prompt and go!
response = client.complete(prompt='What is Anarchy?',
                           context='',
                           openai_key='ENTER_YOUR_API_KEY')
print(response)
# Anarchy is a political ideology that advocates for the absence of government...

Our Client lets you interact with OpenAI's completion endpoint seamlessly.
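
If you'd rather not hard-code the key, here is a minimal sketch that reads it from an environment variable (the OPENAI_API_KEY variable name is our convention here, not something LLM-VM requires):

# import our client
from llm_vm.client import Client
import os

client = Client(big_model='chat_gpt')

# Read the key from the environment instead of hard-coding it
response = client.complete(prompt='What is Anarchy?',
                           context='',
                           openai_key=os.getenv('OPENAI_API_KEY'))
print(response)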

Run an LLM Locally

Choose a popular open-source LLM and use it as easily as ChatGPT

# import our client
from llm_vm.client import Client
import os

# Instantiate the client, specifying which LLM you want to use
client = Client(big_model='pythia')

# Put in your prompt and go!
response = client.complete(prompt='What is anarchy?',
                           context='')
print(response)
# Anarchy is a political system in which the state is abolished and the people are free...

Supported Models

Easily choose from a selection of popular LLMs

Be sure to check our System Requirements to confirm you can run your desired model.

Supported_Models = [  # Use these strings to call the model
    'chat_gpt', 'gpt', 'neo', 'llama',
    'bloom', 'opt', 'pythia'
]
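
Any of these strings can be passed as big_model (or small_model), and the rest of the API stays the same; for instance, a minimal sketch swapping in BLOOM:

# import our client
from llm_vm.client import Client

# Swap the model by changing the string; the completion call is unchanged
client = Client(big_model='bloom')
response = client.complete(prompt='What is Anarchy?', context='')
print(response)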

Picking a Different Model

Need a bigger or smaller version of a supported model?

With LLM-VM this is no problem!

# import our client
from llm_vm.client import Client

# After selecting the model family, specify a Hugging Face model URI.
client = Client(big_model='neo',
                big_model_config={'model_uri': 'EleutherAI/gpt-neox-20b'},
                small_model='neo',
                small_model_config={'model_uri': 'EleutherAI/gpt-neo-125m'})

# Put in your prompt and go!
response = client.complete(prompt='What is Anarchy?', context='')
print(response)
# Anarchy is a political philosophy that advocates no government...
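
The same pattern should work for other model families; as a sketch, here is a hypothetical pairing of two Pythia sizes (EleutherAI/pythia-410m and EleutherAI/pythia-70m are real Hugging Face repositories, but treat this particular combination as illustrative):

# import our client
from llm_vm.client import Client

# Hypothetical pairing: a larger Pythia as big_model, a smaller one as small_model
client = Client(big_model='pythia',
                big_model_config={'model_uri': 'EleutherAI/pythia-410m'},
                small_model='pythia',
                small_model_config={'model_uri': 'EleutherAI/pythia-70m'})

response = client.complete(prompt='What is Anarchy?', context='')
print(response)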

Finetuning

Finetuning an LLM

Finetune intelligently with LLM-VM

Our strategy pairs a small LLM with a larger LLM in a student/teacher relationship.

By carefully designing your prompt and context, you can use our data-synthesis technique to quickly scale up datasets, giving you the training data you need.

# import our client
from llm_vm.client import Client
from llm_vm.config import settings

# Instantiate the client, specifying the teacher (big) and student (small) LLMs
client = Client(big_model='chat_gpt', small_model='pythia')

# Put in your prompt and go!
response = client.complete(prompt="Answer question Q. ",
                           context="Q: What is the currency in Myanmar?",
                           openai_key=settings.openai_api_key,
                           temperature=0.0,
                           data_synthesis=True,
                           finetune=True)
print(response)

Loading a finetuned LLM from disk

Quickly test and deploy your finetuned models with LLM-VM

Just specify the parent model, your finetuned filename, and you’re almost there!

# import our client
from llm_vm.client import Client

# Instantiate the client, specifying the parent model
client = Client(big_model='pythia')

# Specify the file name of the finetuned model to load
model_name = '<filename_of_your_model>.pt'
client.load_finetune(model_name)

# Put in your prompt and go!
response = client.complete(prompt='What is anarchy?',
                           context='')
print(response)

Join our Discord Community

Have questions or ideas? Join the community!