
Local Cocoa Banner

🍫 Local Cocoa: Your Personal Coworker, Fully Local 💻

License: MIT · macOS · Windows · Linux


💻 Local Cocoa runs entirely on your device, not in the cloud.

🧠 Each file turns into memory. Memories form context. Context sparks insight. Insight powers action.

🔒 No external eyes. No data leaving your device. Just your computer learning you better and helping you work smarter.

🎬 Live Demos

  • 🔍 File Retrieval: Instantly chat with your local files
  • 📊 Year-End Report: Scan 2025 files for insights
  • ⌨️ Global Shortcuts: Access Synvo anywhere

Key Features

πŸ›‘οΈ Privacy First

  • πŸ” Fully Local Privacy: All inference, indexing, and retrieval run entirely on your device with zero data leaving.
    • *πŸ’‘ Pro Tip: If you verify network activity using tools like Little Snitch (macOS) or GlassWire (Windows), you'll confirm that no personal data leaves your device.

🧠 Core Intelligence

  • 🧠 Multimodal Memory: Turns documents, images, audio, and video into a persistent semantic memory space.
  • 🔍 Vector-Powered Retrieval: Local Qdrant search with semantic reranking for precise, high-recall answers.
  • 📁 Intelligent Indexing: Monitors folders and incrementally indexes changed files, chunking and embedding them into vectors efficiently.
  • 🖼 Vision Understanding: Integrated OCR and VLM to extract text and meaning from screenshots and PDFs.
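As a rough illustration of the chunking step, the sketch below splits text into fixed-size pieces with overlap so that context survives chunk boundaries. It is a minimal stand-in, not the project's actual chunker; `chunk_text` and its parameters are hypothetical:

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping fixed-size chunks (toy example)."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break  # the last chunk already covers the tail of the text
    return chunks

# Each chunk shares `overlap` characters with its neighbour.
print(chunk_text("abcdefgh", chunk_size=4, overlap=2))  # ['abcd', 'cdef', 'efgh']
```

Real indexers usually split on sentence or token boundaries rather than raw characters, but the overlap idea is the same.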

⚡ Performance & Experience

  • ⚡ Hardware Accelerated: Optimized llama.cpp engine designed for Apple Silicon and consumer GPUs.
  • 🍫 Focused UX: A calm, responsive interface designed for clarity and seamless interaction.
  • ✍ Integrated Notes: Write notes that become part of your semantic memory for future recall.
  • 🔁 Auto-Sync: Automatically detects file changes and keeps your knowledge base fresh.

πŸ—οΈ Architecture Overview

Local Cocoa runs entirely on your device. It combines file ingestion, intelligent chunking, and local retrieval to build a private on-device knowledge system.

Local Cocoa Architecture Diagram

Frontend: Electron • React • TypeScript • TailwindCSS
Backend: FastAPI • llama.cpp • Qdrant
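To make the retrieval flow concrete, here is a toy sketch of embedding and similarity search. It substitutes a bag-of-words counter for a real embedding model and a linear scan for Qdrant's vector index, so it only illustrates the shape of the pipeline, not the project's code:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words count vector (a real model outputs dense floats)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank stored chunks by similarity to the query and return the top k."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = ["invoice for march rent", "photo of the beach", "notes on quarterly report"]
print(retrieve("march invoice", docs))  # ['invoice for march rent']
```

In the real system the embedding model produces dense vectors and Qdrant handles indexing, filtering, and approximate nearest-neighbour search, with a reranker refining the top results.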

🎯 The Ultimate Goal of Local Cocoa

Local Cocoa Vision Diagram
We're actively developing these features; contributions welcome!
  • 👑 More Connectors: Google Drive, Notion, Slack integration
  • 🎤 Voice Mode: Local speech-to-text for voice interaction
  • 🔌 Plugin Ecosystem: Open API for community tools and agents

✨ Contributors

💡 Core Contributors

EricFan2002
Jingkang50
Tom-TaoQin
choiszt
KairuiHu

🌍 Community Contributors

πŸ› οΈ Quick Start

Local Cocoa uses a modern Electron + React + Python FastAPI hybrid architecture.

🚀 Prerequisites

Ensure the following are installed on your system:

  • Node.js v18.17 or higher
  • Python v3.10 or higher
  • CMake (for building the llama.cpp server)

Step 1: Clone the Repository

git clone https://github.com/synvo-ai/local-cocoa.git
cd local-cocoa

Step 2: Install Dependencies

# Frontend / Electron
npm install

# Backend / RAG Agent (macOS/Linux)
python3 -m venv .venv
source .venv/bin/activate
pip install -r services/app/requirements.txt

# Backend / RAG Agent (Windows PowerShell)
python -m venv .venv
.venv\Scripts\Activate.ps1
pip install -r services/app/requirements.txt

Step 3: Download Local Models

We provide a script to automatically download embedding, reranker, and vision models:

npm run models:download

Proxy Support (Clash / Shadowsocks / Corporate)

Model downloads support:

  • System proxy (recommended): If Clash/Shadowsocks is set as your OS proxy, downloads will use it automatically.
  • Environment variables: Set one of these (case-insensitive):
    • HTTPS_PROXY / HTTP_PROXY (e.g., http://127.0.0.1:7890)
    • ALL_PROXY (supports socks5://...)
    • NO_PROXY (comma-separated bypass list, e.g., localhost,127.0.0.1)
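As a quick sanity check of the environment-variable route, Python's standard library honours the same variables; the proxy address below is just an example value, not a recommendation:

```python
import os
import urllib.request

# Example values only; substitute your real proxy address.
os.environ["HTTPS_PROXY"] = "http://127.0.0.1:7890"
os.environ["NO_PROXY"] = "localhost,127.0.0.1"

proxies = urllib.request.getproxies()            # picks up *_PROXY variables
print(proxies["https"])                          # http://127.0.0.1:7890
print(bool(urllib.request.proxy_bypass("localhost")))  # True: host is in NO_PROXY
```

The download script resolves proxies the same way, so if this prints your proxy address, model downloads should route through it.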

Windows PowerShell example:

$env:HTTPS_PROXY = "http://127.0.0.1:7890"
$env:NO_PROXY = "localhost,127.0.0.1"
npm run models:download

Step 4: Build Llama Server

Windows Users: If you have pre-compiled binaries, place llama-server.exe in runtime/llama-cpp/bin/.

Build llama-server using CMake:

mkdir -p runtime && cd runtime
git clone https://github.com/ggerganov/llama.cpp.git llama-cpp
cd llama-cpp
mkdir -p build && cd build
cmake .. -DLLAMA_BUILD_SERVER=ON
cmake --build . --target llama-server --config Release
cd ..

# Organize binaries (macOS/Linux)
mkdir -p bin
cp build/bin/llama-server bin/llama-server

# Windows: cp build/bin/Release/llama-server.exe bin/llama-server.exe

cd ../..

Step 5: Build Whisper Server (Speech-to-Text)

To enable transcriptions:

# In runtime folder
cd runtime
git clone https://github.com/ggml-org/whisper.cpp.git whisper-cpp
cd whisper-cpp
cmake -B build
cmake --build build -j --config Release
mv build/bin ./
# The app expects the binary at runtime/whisper-cpp/bin/whisper-server
# For Windows, check build/bin/Release/whisper-server.exe
cd ../..

πŸƒ Run in Development Mode

Ensure your Python virtual environment is active, then run:

# macOS/Linux
source .venv/bin/activate
npm run dev

# Windows PowerShell
.venv\Scripts\Activate.ps1
npm run dev

This launches the React Dev Server, Electron client, and FastAPI backend simultaneously.


🤝 Contributing

We welcome contributions of all kinds: bug fixes, features, or documentation improvements.

Please read our Contribution Guidelines before submitting a Pull Request or Issue.

Quick Guide

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Make your changes
  4. Commit your changes (git commit -m 'feat: add amazing feature')
    • πŸ” Pre-commit hooks will automatically check your code for errors
    • Run npm run lint:fix to auto-fix common issues
  5. Push to the branch (git push origin feature/amazing-feature)
  6. Open a Pull Request

Code Quality

This project enforces code quality through automated pre-commit hooks:

  • ✅ ESLint checks for unused imports/variables and coding standards
  • ✅ TypeScript ensures type safety
  • ✅ Commits are blocked if errors are found

See CONTRIBUTING.md for details.

Thank you to everyone who has contributed to Local Cocoa! 🙏

📄 License

This project is licensed under the MIT License. See the LICENSE file for details.
