
[FEATURE] set up sherpa to run using local LLMs #192

@amirfz

Description

PROBLEM
Currently sherpa can only interface with the OpenAI API. This is limiting for development and testing, and it narrows the range of use cases the system can handle (e.g., using local files as a knowledge base without sending information to third-party providers).

SOLUTION
Refactor the LLM handling components into their own module and, in parallel, research and set up a library that allows serving local LLMs behind an API (e.g., https://ollama.ai/).
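
For discussion, here is a minimal sketch of what the refactored module boundary could look like. `BaseLLM`, `OpenAILLM`, and `OllamaLLM` are hypothetical names, not existing sherpa code; the Ollama call assumes its documented `/api/generate` endpoint and the OpenAI call assumes the pre-1.0 `openai` client style.

```python
# Hypothetical provider-agnostic interface; none of these class names
# exist in sherpa today.
from abc import ABC, abstractmethod

import openai    # assumed to already be a sherpa dependency
import requests


class BaseLLM(ABC):
    """Minimal contract every backend must satisfy."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class OpenAILLM(BaseLLM):
    """Wraps the existing OpenAI call (pre-1.0 openai client style)."""

    def __init__(self, model: str = "gpt-3.5-turbo"):
        self.model = model

    def complete(self, prompt: str) -> str:
        # The openai client reads OPENAI_API_KEY from the environment.
        result = openai.ChatCompletion.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
        )
        return result["choices"][0]["message"]["content"]


class OllamaLLM(BaseLLM):
    """Talks to a local Ollama server (default port 11434)."""

    def __init__(self, model: str = "llama2", host: str = "http://localhost:11434"):
        self.model = model
        self.host = host

    def complete(self, prompt: str) -> str:
        # /api/generate is Ollama's completion endpoint at the time of
        # writing; stream=False returns one JSON object instead of a
        # stream of chunks.
        resp = requests.post(
            f"{self.host}/api/generate",
            json={"model": self.model, "prompt": prompt, "stream": False},
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["response"]
```

With a boundary like this, the rest of sherpa only depends on `BaseLLM.complete`, and adding another local backend later is a new subclass rather than a change to call sites.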

Challenges:

  1. How will this impact the default prompts in the system? Do we need to maintain several sets of prompts, one per model?
  2. For deployed sherpa we will continue using OpenAI - will this switch create too many logistical and manual steps between development and deployment? (See the sketch after this list for one way to keep a single code path.)
  3. What tests and evaluation guardrails are necessary to ensure the system doesn't run into integration errors and misbehaviors?
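
For challenges 2 and 3, one option is a config-driven factory plus a provider-parametrized smoke test, so dev and deployment share one code path and integration breakage surfaces early. This builds on the hypothetical classes sketched above; `SHERPA_LLM_PROVIDER` and `SHERPA_LLM_MODEL` are invented names, not existing config.

```python
# Hypothetical factory reusing BaseLLM/OpenAILLM/OllamaLLM from the
# sketch above; the environment variable names are invented.
import os

import pytest


def get_llm() -> BaseLLM:
    """Pick a backend from config so dev and deployment share one code path."""
    provider = os.environ.get("SHERPA_LLM_PROVIDER", "openai")
    if provider == "ollama":
        return OllamaLLM(model=os.environ.get("SHERPA_LLM_MODEL", "llama2"))
    return OpenAILLM(model=os.environ.get("SHERPA_LLM_MODEL", "gpt-3.5-turbo"))


# A first guardrail for challenge 3: the same smoke test runs against
# every backend, so provider-specific misbehavior shows up in CI
# before it reaches deployment.
@pytest.mark.parametrize("llm", [OpenAILLM(), OllamaLLM()])
def test_complete_returns_nonempty_text(llm: BaseLLM):
    answer = llm.complete("Reply with the single word: pong")
    assert isinstance(answer, str) and answer.strip()
```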

ALTERNATIVES
One idea we considered was finding a way to use OpenAI for free or at lower cost (for example, through their research grant programs). This does not solve the latter problem mentioned above (local use), and such a grant might also take a long time to acquire.

OTHER INFO
n/a

Labels: feature (proposal to add a new feature)
