Local Cocoa runs entirely on your device, not in the cloud.
Each file becomes a memory. Memories form context. Context sparks insight. Insight powers action.
No external eyes. No data leaving. Just your computer learning you better and helping you work smarter.
| File Retrieval | Year-End Report | Global Shortcuts |
|---|---|---|
| Instantly chat with your local files | Scan 2025 files for insights | Access Synvo anywhere |
- Fully Local Privacy: All inference, indexing, and retrieval run entirely on your device, with zero data leaving it.
  - Pro Tip: If you verify network activity with tools like Little Snitch (macOS) or GlassWire (Windows), you'll confirm that no personal data leaves your device.
- Multimodal Memory: Turns documents, images, audio, and video into a persistent semantic memory space.
- Vector-Powered Retrieval: Local Qdrant search with semantic reranking for precise, high-recall answers.
- Intelligent Indexing: Monitors folders to incrementally index, chunk, and embed vectors efficiently.
- Vision Understanding: Integrated OCR and VLM to extract text and meaning from screenshots and PDFs.
- Hardware Accelerated: An optimized `llama.cpp` engine designed for Apple Silicon and consumer GPUs.
- Focused UX: A calm, responsive interface designed for clarity and seamless interaction.
- Integrated Notes: Write notes that become part of your semantic memory for future recall.
- Auto-Sync: Automatically detects file changes and keeps your knowledge base fresh.
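The Pro Tip above can also be spot-checked from a terminal. A minimal sketch — note that `local-cocoa` is a placeholder process name here, not a documented one; substitute whatever the app's processes are actually called on your system:

```shell
# Placeholder process name; substitute the real Local Cocoa process name.
APP="local-cocoa"
if command -v lsof >/dev/null 2>&1; then
  # List open network sockets and filter for the app's processes.
  lsof -nP -i 2>/dev/null | grep -i "$APP" || echo "no open network sockets found for $APP"
else
  echo "lsof not available; use Little Snitch or GlassWire instead"
fi
```

An empty result (no sockets for the app) is consistent with the fully-local claim; GUI tools like Little Snitch give a more complete, continuous picture.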
Local Cocoa runs entirely on your device. It combines file ingestion, intelligent chunking, and local retrieval to build a private on-device knowledge system.
Frontend: Electron β’ React β’ TypeScript β’ TailwindCSS
Backend: FastAPI β’ llama.cpp β’ Qdrant
- π More Connectors: Google Drive, Notion, Slack integration
- π€ Voice Mode: Local speech-to-text for voice interaction
- π Plugin Ecosystem: Open API for community tools and agents
- EricFan2002
- Jingkang50
- Tom-TaoQin
- choiszt
- KairuiHu
Local Cocoa uses a modern Electron + React + Python FastAPI hybrid architecture.
Ensure the following are installed on your system:
- Node.js v18.17 or higher
- Python v3.10 or higher
- CMake (for building the `llama.cpp` server)
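You can sanity-check these prerequisites from a terminal before starting; a minimal sketch:

```shell
# Report the installed version of each prerequisite, or flag it as missing.
for cmd in node python3 cmake; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: $("$cmd" --version 2>&1 | head -n1)"
  else
    echo "$cmd: NOT FOUND"
  fi
done
```

Compare the reported versions against the minimums above (Node.js ≥ 18.17, Python ≥ 3.10).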
```shell
git clone https://github.com/synvo-ai/local-cocoa.git
cd local-cocoa

# Frontend / Electron
npm install

# Backend / RAG Agent (macOS/Linux)
python3 -m venv .venv
source .venv/bin/activate
pip install -r services/app/requirements.txt

# Backend / RAG Agent (Windows PowerShell)
python -m venv .venv
.venv\Scripts\Activate.ps1
pip install -r services/app/requirements.txt
```

We provide a script to automatically download embedding, reranker, and vision models:
```shell
npm run models:download
```

Proxy Support (Clash / Shadowsocks / Corporate)
Model downloads support:
- System proxy (recommended): If Clash/Shadowsocks is set as your OS proxy, downloads will use it automatically.
- Environment variables: Set one of these (case-insensitive):
  - `HTTPS_PROXY` / `HTTP_PROXY` (e.g., `http://127.0.0.1:7890`)
  - `ALL_PROXY` (supports `socks5://...`)
  - `NO_PROXY` (comma-separated bypass list, e.g., `localhost,127.0.0.1`)
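On macOS/Linux, the equivalent environment-variable setup looks like this (the proxy address below is only an example; use your own proxy's host and port):

```shell
# Example proxy address; replace with your actual proxy's host and port.
export HTTPS_PROXY="http://127.0.0.1:7890"
export NO_PROXY="localhost,127.0.0.1"
# Then run the model download through the proxy:
# npm run models:download
```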
Windows PowerShell example:
```shell
$env:HTTPS_PROXY = "http://127.0.0.1:7890"
$env:NO_PROXY = "localhost,127.0.0.1"
npm run models:download
```

Windows Users: If you have pre-compiled binaries, place `llama-server.exe` in `runtime/llama-cpp/bin/`.
Build `llama-server` using CMake:

```shell
mkdir -p runtime && cd runtime
git clone https://github.com/ggerganov/llama.cpp.git llama-cpp
cd llama-cpp
mkdir -p build && cd build
cmake .. -DLLAMA_BUILD_SERVER=ON
cmake --build . --target llama-server --config Release
cd ..

# Organize binaries (macOS/Linux)
mkdir -p bin
cp build/bin/llama-server bin/llama-server
# Windows: cp build/bin/Release/llama-server.exe bin/llama-server.exe
cd ../..
```

To enable transcriptions:
```shell
# In the runtime folder
cd runtime
git clone https://github.com/ggml-org/whisper.cpp.git whisper-cpp
cd whisper-cpp
cmake -B build
cmake --build build -j --config Release
mv build/bin ./
# The app expects the binary at runtime/whisper-cpp/bin/whisper-server
# On Windows, check build/bin/Release/whisper-server.exe
cd ../..
```

Ensure your Python virtual environment is active, then run:
```shell
# macOS/Linux
source .venv/bin/activate
npm run dev

# Windows PowerShell
.venv\Scripts\Activate.ps1
npm run dev
```

This launches the React Dev Server, Electron client, and FastAPI backend simultaneously.
We welcome contributions of all kindsβbug fixes, features, or documentation improvements.
Please read our Contribution Guidelines before submitting a Pull Request or Issue.
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Make your changes
- Commit your changes (`git commit -m 'feat: add amazing feature'`)
  - Pre-commit hooks will automatically check your code for errors
  - Run `npm run lint:fix` to auto-fix common issues
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project enforces code quality through automated pre-commit hooks:
- ESLint checks for unused imports/variables and coding standards
- TypeScript ensures type safety
- Commits are blocked if errors are found
See CONTRIBUTING.md for details.
Thank you to everyone who has contributed to Local Cocoa!
This project is licensed under the MIT License. See the LICENSE file for details.










