# Monday Kernel

Monday Kernel is a local-first, polyglot intelligence engine designed to act as a "Digital Twin" of your technical brain. It bridges the gap between your real-time activity and your long-term memory by integrating OS-level hooks, live audio transcription, and a hybrid knowledge graph.
## Architecture

The project is built using a "Best Tool for the Job" philosophy across three primary languages:
- Rust (Core Orchestrator): Acts as the high-performance "Executive Function," routing data between agents and maintaining thread-safe system state.
- Python (Intelligence Vault & Secretary): Handles the heavy lifting of AI models. It runs OpenAI Whisper for local transcription and manages the ChromaDB (Vector) and Neo4j (Graph) databases.
- C# / .NET (Sentinel UI): Interfaces directly with the Windows API to monitor focused windows and provides the Cerebro HUD, a global Alt+Space search interface.
## Features

- Hybrid GraphRAG: Combines semantic search (Vector) with relational mapping (Graph) to provide context-aware retrieval.
- Sentinel Focus Tracking: Automatically tags every ingested note with the application you were using at that moment (e.g., VS Code, Chrome, Slack).
- Command-Driven Audio: Toggle local audio recording via `/listen` slash commands in the search bar to capture meeting notes and thoughts.
- 100% Local & Private: No data ever leaves your machine. Whisper, the databases, and the orchestrator all run on localhost.
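The hybrid retrieval idea can be sketched in plain Python. Everything below is illustrative: the note IDs, 2-D embeddings, and function names are stand-ins for the actual ChromaDB and Neo4j calls, not the project's API.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_retrieve(query_vec, vectors, graph, top_k=2):
    """Rank notes by embedding similarity (the vector leg), then
    expand the top hits one hop through the graph (the graph leg)."""
    ranked = sorted(vectors, key=lambda nid: cosine(query_vec, vectors[nid]),
                    reverse=True)[:top_k]
    context = set(ranked)
    for nid in ranked:
        context.update(graph.get(nid, []))  # pull in linked notes
    return ranked, sorted(context)

# Toy corpus: three notes with 2-D embeddings and explicit links.
vectors = {"note:rust": [1.0, 0.1], "note:whisper": [0.2, 1.0], "note:neo4j": [0.9, 0.3]}
graph = {"note:rust": ["note:neo4j"], "note:whisper": ["note:neo4j"]}

hits, context = hybrid_retrieve([1.0, 0.0], vectors, graph)
```

The one-hop expansion is what makes the result "context-aware": a note that is semantically distant from the query still surfaces if it is explicitly linked to a top hit.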
## Prerequisites

- Python 3.10+ (Conda recommended)
- Rust 1.75+ (Cargo)
- .NET 9.0 SDK
- FFmpeg (Required for Whisper audio processing)
- Docker Desktop (For Neo4j and ChromaDB)
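A quick way to confirm these tools are on your PATH before launching anything. This is a hypothetical helper, not part of the repository; the tool names in `REQUIRED_TOOLS` mirror the prerequisites above.

```python
import shutil

# Hypothetical list mirroring the prerequisites above.
REQUIRED_TOOLS = ["python", "cargo", "dotnet", "ffmpeg", "docker"]

def check_prereqs(tools=REQUIRED_TOOLS):
    """Return a {tool: found?} map using shutil.which for PATH lookup."""
    return {tool: shutil.which(tool) is not None for tool in tools}

missing = [t for t, ok in check_prereqs().items() if not ok]
if missing:
    print(f"Missing prerequisites: {', '.join(missing)}")
```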
## Installation

1. Clone the repository:

   ```bash
   git clone https://github.com/YOUR_USERNAME/monday-kernel.git
   cd monday-kernel
   ```

2. Set up the databases:

   ```bash
   docker-compose up -d
   ```

3. Install Python dependencies:

   ```bash
   pip install -r requirements.txt
   ```

4. Build the Sentinel:

   ```bash
   cd sentinel-ui/MondaySentinel.App
   dotnet build
   ```
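For reference, the `docker-compose.yml` consumed by the database step might look like the following sketch. The image tags, ports, and credentials are assumptions for illustration, not the repository's actual file.

```yaml
# Hypothetical docker-compose.yml; images, ports, and credentials
# are illustrative, not the project's actual configuration.
services:
  neo4j:
    image: neo4j:5
    ports:
      - "7474:7474"   # HTTP browser
      - "7687:7687"   # Bolt protocol
    environment:
      - NEO4J_AUTH=neo4j/changeme
  chromadb:
    image: chromadb/chroma
    ports:
      - "8000:8000"   # Chroma HTTP API
```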
## Usage

The entire ecosystem is managed by a single "Butter Script":

```bash
./run_monday.sh
```

- Alt + Space: Open/Close the Cerebro HUD.
- /listen: Toggle live audio transcription.
- [Query]: Search your brain using natural language.
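A minimal sketch of how the HUD input might distinguish a `/listen` command from a natural-language query. All names here are illustrative; the real dispatch lives in the C# Sentinel, and the recorder below is a toy stand-in for the Whisper pipeline.

```python
def parse_input(text):
    """Classify raw HUD input as ('command', name) or ('query', text)."""
    text = text.strip()
    if text.startswith("/"):
        # Slash prefix means a command; take the first word after "/".
        return ("command", text[1:].split()[0].lower())
    return ("query", text)

class AudioRecorder:
    """Toy stand-in for the /listen toggle; a real recorder would
    stream microphone audio into Whisper for transcription."""
    def __init__(self):
        self.recording = False

    def toggle(self):
        self.recording = not self.recording
        return self.recording
```

Usage: `parse_input("/listen")` yields a command, while any other text falls through to the GraphRAG query path.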
## Roadmap

Implemented:

- OS-level context tracking
- Hybrid Vector/Graph retrieval
- Command-driven live audio transcription

Next:

- Local LLM Synthesis (Llama 3 Integration)
- Automated Deep Work "Do Not Disturb" mode