🚧 This project is in its early stages and is currently under active development 🚧
This project is a modular bricolage: it purposefully integrates existing tools to expose a local codebase to a suite of language models. The strategy prioritizes cost efficiency by handling routine queries with local models and reserving cloud-based models such as Claude for high-complexity tasks. A significant side benefit is 'always-live' documentation: the index stays synchronized and queryable as the code evolves.
- UV
- Ollama
- Python
- LanceDB
- Continue Extension
- Nomic Embed Text (or any viable embedding model)
- DeepSeek Coder V2 Lite (or any viable language model(s))
- Visual Studio Code / Cursor (any VS Code fork should work)
👨🏾‍🔧 It is assumed that you already have UV, Python, Ollama, VS Code (or an equivalent fork) and the Continue extension installed. If not, please install them before proceeding. 👩🏾‍🔧
2.1 Install the dependencies: `uv sync`
2.2 Pull the required models (ensure Ollama is running):
- `ollama pull deepseek-coder-v2:lite`
- `ollama pull nomic-embed-text`
2.3 In the Continue MCP-server config file (`.continue\mcpServers\pano.yaml`), replace the placeholder `your/path/to/main.py` with the path to this repository's `main.py`.
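The repository's `pano.yaml` is not reproduced here; as a rough illustration only, a Continue MCP-server config generally takes a shape like the following (the field names and the `uv run` invocation are assumptions — defer to the file shipped in this repo):

```yaml
name: pano
version: 0.0.1
schema: v1
mcpServers:
  - name: pano
    command: uv
    args:
      - run
      - your/path/to/main.py
```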
2.4 Ensure that the contents of your Continue local config match the `continue_local_example.yaml` config.
2.5 Index the codebase: `uv run main.py index`
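As a rough sketch of what an indexing pass like this typically does — chunk each source file, embed the chunks with `nomic-embed-text` through Ollama, and store the vectors in LanceDB. All function and table names below are illustrative assumptions, not the project's actual API:

```python
from pathlib import Path


def chunk_text(text: str, size: int = 1200, overlap: int = 200) -> list[str]:
    """Split text into overlapping character windows for embedding."""
    chunks = []
    step = size - overlap
    for start in range(0, max(len(text), 1), step):
        piece = text[start:start + size]
        if piece:
            chunks.append(piece)
    return chunks


def index_codebase(root: str = ".") -> None:
    """Embed every Python file under `root` and store the vectors in LanceDB.

    Requires the `ollama` and `lancedb` packages and a running Ollama
    server; the table name "code_chunks" is an assumption.
    """
    import ollama
    import lancedb

    rows = []
    for path in Path(root).rglob("*.py"):
        for chunk in chunk_text(path.read_text(errors="ignore")):
            emb = ollama.embeddings(model="nomic-embed-text", prompt=chunk)
            rows.append({"path": str(path), "text": chunk,
                         "vector": emb["embedding"]})

    db = lancedb.connect(".lancedb")
    db.create_table("code_chunks", data=rows, mode="overwrite")
```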
2.6 You should now see the MCP server in Continue and be able to query your codebase (see below):
| 2.6.a MCP Connection | 2.6.b Ollama Models |
|---|---|
| *(screenshot)* | *(screenshot)* |

| 2.6.c Successful Query to Codebase | |
|---|---|
| *(screenshot)* | |
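Under the hood, a query is the reverse of indexing: embed the question, run a vector search over the stored chunks, and hand the top hits to the model as context. A minimal sketch, assuming the same hypothetical table name as above (none of these names are the project's actual API):

```python
def query_codebase(question: str, k: int = 5) -> list[dict]:
    """Embed the question with nomic-embed-text and return the k nearest
    code chunks from a LanceDB table named "code_chunks" (name assumed).

    Requires the `ollama` and `lancedb` packages and a running Ollama server.
    """
    import ollama
    import lancedb

    emb = ollama.embeddings(model="nomic-embed-text", prompt=question)
    table = lancedb.connect(".lancedb").open_table("code_chunks")
    return table.search(emb["embedding"]).limit(k).to_list()


def format_hits(hits: list[dict]) -> str:
    """Render retrieved chunks as a single context string for the LLM."""
    return "\n\n".join(f"# {h['path']}\n{h['text']}" for h in hits)
```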
🟧 Switch to a file-change based indexing strategy
🟧 Improve usability to allow drop-in setup for any project
🟧 Evaluate other models for performance & usability (for example, `qwen3-embedding:0.6b` with `qwen3.5:9b`)