Examples

Here we give two examples of how you can generate completions with our LLM-VM.

  • The OpenAI Endpoint example calls OpenAI’s gpt-3.5-turbo model for a completion; it requires your OpenAI API Key and uses OpenAI’s hosted endpoint.
  • The Local Endpoint example shows how you can run an LLM locally to generate completions just as easily.

# import our client
from llm_vm.client import Client

# Select the ChatGPT endpoint from OpenAI
client = Client(big_model='chat_gpt')

# Put in your prompt and go!
response = client.complete(
    prompt='What is Anarchy?',
    context='',
    openai_key='OPENAI_API_KEY')
print(response)
# Anarchy is a political ideology that advocates for the absence of government...
Note: Using OpenAI’s models requires an OpenAI API Key and may incur costs that are not associated with Anarchy’s LLM-VM.
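The local endpoint follows the same pattern, with no API key needed. A minimal sketch, assuming `'pythia'` is one of the locally runnable model names accepted by `big_model` (check the Supported Models list for the names your install actually supports):

```python
# import our client
from llm_vm.client import Client

# Select a locally run model; 'pythia' is assumed here to be one of the
# supported open model families
client = Client(big_model='pythia')

# Put in your prompt and go! No API key is needed for local models.
response = client.complete(
    prompt='What is Anarchy?',
    context='')
print(response)
```

The first call may download model weights, so expect the initial completion to take longer than later ones.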

Supported Models

We support several open LLM model families. The supported families, and the default model used for each, are listed below.

For more information on selecting models visit our Local LLMs section.

Visit our GitHub Repo

Interested in learning more? Come see the code!