Get up and running
Almost all of the functionality described in the previous sections can be accessed through an API call. First, start the server with the following command.
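The original does not name the entry point, so the module name below is an assumption; substitute your project's actual app module. A typical launch with the Flask CLI looks like:

```shell
# Start the Flask development server.
# "app" is a hypothetical module name -- replace with your project's entry point.
# Host and port mirror the default settings shown below.
flask --app app run --host 127.0.0.1 --port 3002
```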
This will spin up the Flask server with the settings specified in your settings file.
Specifying your models
Save your development environment settings in a configuration file:
# Default settings for your application
BIG_MODEL = "chat_gpt"
PORT = 3002
SMALL_MODEL = "bloom"
HOST = "127.0.0.1"
The server reads this file at startup to determine which LLMs to use, giving you consistent behavior on every launch.
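The source does not show how the server consumes the settings file, so the sketch below is an illustration, not the project's actual code. It loads a Python-style `KEY = value` settings file like the one above using only the standard library; the helper name `load_settings` and the file name are hypothetical.

```python
# Minimal sketch (stdlib only): load a Python-style settings file and
# return its variables as a dict. The file name "settings.py" and the
# function name are assumptions for illustration.
from pathlib import Path


def load_settings(path):
    """Execute a simple KEY = value settings file and collect its variables."""
    settings = {}
    exec(Path(path).read_text(), {}, settings)
    return settings


# Example usage (assumes settings.py contains the defaults shown above):
#   config = load_settings("settings.py")
#   big_model = config["BIG_MODEL"]
#   host, port = config["HOST"], config["PORT"]
```

A real Flask app could achieve the same thing with `app.config.from_pyfile(...)`, which follows the same convention of reading uppercase variables from a Python file.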
Visit our GitHub Repo
Interested in learning more? Come see the code!