Add Local Model Support #1

@RedLordezh7Venom

Description

Local model support is necessary for privacy. The options are:

  • Ollama / llama.cpp : could be bundled with the installation, but they are hard to set up and bring extra dependencies
  • Transformers : could lead to dependency issues

Best option:

  • Llamafile : use LlamaIndex with llamafile for easy setup and running an LLM; no other dependencies, a single file runs the model
  • It is based on llama.cpp, ggml, and Cosmopolitan (cosmocc), which bundles everything into one executable that runs on Linux, Windows, and other platforms
  • Just download the model file and run it; it is easy to integrate into the setup
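As a minimal sketch of the integration above: once a llamafile is downloaded and started (e.g. `./model.llamafile --server`), it serves an OpenAI-compatible HTTP API on localhost, so the app can talk to it with plain HTTP and no extra dependencies. The URL, port, and model file name below are assumptions for illustration, not values from this issue.

```python
import json

# Assumed default: llamafile's built-in server listens on localhost:8080
# and exposes an OpenAI-compatible chat completions endpoint.
LLAMAFILE_URL = "http://localhost:8080/v1/chat/completions"

def build_chat_request(prompt: str, temperature: float = 0.2) -> dict:
    """Build an OpenAI-style chat payload for the local llamafile server."""
    return {
        # llamafile serves whatever model file it was started with,
        # so the model name here is largely a placeholder
        "model": "local-model",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

# To actually query the model (requires a running llamafile server):
# import urllib.request
# req = urllib.request.Request(
#     LLAMAFILE_URL,
#     data=json.dumps(build_chat_request("Hello")).encode(),
#     headers={"Content-Type": "application/json"},
# )
# reply = json.load(urllib.request.urlopen(req))
# print(reply["choices"][0]["message"]["content"])
```

LlamaIndex also ships a `Llamafile` LLM integration that wraps this same local server, so either the raw HTTP route or the LlamaIndex wrapper would fit the setup described here.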
