Quickstart
Generating Completions
You can start generating completions through OpenAI or locally with the LLM-VM in three lines of code. Just specify a big_model to select where your completions are generated.
Examples
Here we give two examples of how you can generate completions with our LLM-VM.
OpenAI Endpoint
The first example calls OpenAI's gpt-3.5-turbo model for a completion, which requires your OpenAI API key and uses their endpoint.
Local Endpoint
The second example shows how you can use a local LLM to generate completions just as easily.
# import our client
from llm_vm.client import Client
# Selecting the Chat GPT endpoint from OpenAI
client = Client(big_model='chat_gpt')
# Put in your prompt and go!
response = client.complete(
    prompt='What is Anarchy?',
    context='',
    openai_key='OPENAI_API_KEY')  # replace with your OpenAI API key
print(response)
# Anarchy is a political ideology that advocates for the absence of government...
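The Local Endpoint variant follows the same three-line pattern, just without an API key. Here is a minimal sketch, assuming 'pythia' is one of the supported local big_model names (check the Supported Models table and the Local LLMs section for the exact names available in your version):

```python
# import our client
from llm_vm.client import Client

# Selecting a small open model that runs locally -- no API key needed
# (assumption: 'pythia' is a valid big_model value in your install)
client = Client(big_model='pythia')

# Put in your prompt and go!
response = client.complete(
    prompt='What is Anarchy?',
    context='')
print(response)
```

The first call will download the model weights locally, so expect it to be slower than subsequent runs.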
Using OpenAI's models requires an OpenAI API key and may incur costs that are not associated with Anarchy's LLM-VM.
Supported Models
We support several open LLM model families. The supported families and the default model for each are listed below.
For more information on selecting models visit our Local LLMs section.
Visit our Github Repo
Interested in learning more? Come see the code!