What would you like to see?
I would like to be able to configure the LLM and the embedding model locally; such models are generally available through Ollama or Hugging Face.
Why is it useful? (optional)
It is useful when, due to customer or company requirements, everything must run locally.
What problem does this solve?
Using the cloud is not a problem in itself; it becomes one when the use case does not require it.
Examples or links? (optional)
https://github.com/HKUDS/LightRAG.git
Mockups, references, or prior art.
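To make the request concrete, here is a minimal sketch of what a fully local configuration could look like, modeled on the linked LightRAG project with Ollama serving both the LLM and the embeddings. The import path and function names (`ollama_model_complete`, `ollama_embed`) and the keyword arguments are illustrative assumptions about a possible API, not a definitive interface:

```python
# Hypothetical sketch: LLM and embeddings served by a local Ollama
# instance, so no requests ever leave the machine. All names below
# are assumptions for illustration.
from lightrag import LightRAG
from lightrag.llm.ollama import ollama_model_complete, ollama_embed  # assumed module path

rag = LightRAG(
    working_dir="./rag_storage",
    llm_model_func=ollama_model_complete,   # completions via local Ollama
    llm_model_name="llama3.1:8b",           # any locally pulled model
    embedding_func=ollama_embed,            # embeddings via local Ollama
    # default local Ollama endpoint; no cloud credentials involved
    llm_model_kwargs={"host": "http://localhost:11434"},
)
```

A Hugging Face variant would look the same, swapping the two functions for ones that load models from a local cache instead of calling the Ollama endpoint.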