Advanced LLM-VM
Finetuning
Here we describe how to finetune LLMs and load previously trained models.
Finetuning Models
The LLM-VM uses a data synthesis feature that engages a teacher/student pairing to finetune small, task-specific models from larger, more general ones through a simple process:
- A call is made to OpenAI (or another large-parameter LLM)
- "give me fifty examples like the following query…"
- The response is parsed into a training set for finetuning (a hypothetical illustration of such a set follows this list)
- The `small_model` is trained on the synthesized data set and saved
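Conceptually, the synthesized data set is a collection of prompt/completion pairs parsed out of the teacher model's response. The snippet below is a hypothetical illustration of that structure only; the actual format LLM-VM uses internally may differ:

```python
# Hypothetical illustration only: prompt/completion pairs parsed from the
# teacher model's response. LLM-VM's actual internal format may differ.
synthesized_examples = [
    {"prompt": "Q: What is the currency in Japan?", "completion": "The yen (JPY)."},
    {"prompt": "Q: What is the currency in Brazil?", "completion": "The real (BRL)."},
    # ... roughly fifty such examples generated by the teacher model
]
```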
Finetuning a Model
```python
# import our client
from llm_vm.client import Client
from llm_vm.config import settings

# Instantiate the client, specifying which LLMs you want to use
client = Client(big_model='chat_gpt', small_model='pythia')

# Put in your prompt and go!
response = client.complete(
    prompt="Answer question Q. ",
    context="Q: What is the currency in Myanmar?",
    openai_key=settings.openai_api_key,
    temperature=0.0,
    data_synthesis=True,
    finetune=True,
)
print(response)
```
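In this call, `data_synthesis=True` is what triggers the teacher (`big_model`) to generate the synthetic training set described above, while `finetune=True` trains the `small_model` on that set and saves the result; `temperature=0.0` keeps the teacher's output as deterministic as possible.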
Loading a finetuned LLM from disk
Loading a finetuned model into the LLM-VM for completions is straightforward:
- Specify the family of finetuned models in `big_model`
- Specify the filename of the finetuned model you wish to load
- Run the `load_finetune(model_filename)` method to load the model
- Generate completions in the usual way
Loading a Finetuned Model
```python
# import our client
from llm_vm.client import Client

# Instantiate the client, specifying the family of the finetuned model
client = Client(big_model='pythia')

# Specify the filename of the finetuned model to load
model_name = '<filename_of_your_model>.pt'
client.load_finetune(model_name)

# Put in your prompt and go!
response = client.complete(prompt='What is anarchy?', context='')
print(response)
```
Saved finetuned LLMs can take up a large amount of storage space, so make sure to keep an eye on how much you're saving.
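If you want to track that usage, here is a minimal sketch that sums the size of saved checkpoints. It assumes your finetuned models are stored as `.pt` files in a local directory; the `models/` path is a placeholder you should adjust:

```python
# Minimal sketch: report total disk space used by saved .pt checkpoints.
# Assumes checkpoints live in a known directory; "models/" is a placeholder.
from pathlib import Path

checkpoint_dir = Path("models/")  # adjust to wherever your finetunes are saved
total_bytes = sum(f.stat().st_size for f in checkpoint_dir.glob("*.pt"))
print(f"Finetuned checkpoints use {total_bytes / 1e9:.2f} GB")
```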
Visit our GitHub Repo
Interested in learning more? Come see the code!