Ada

License: CC0-1.0 · Documentation · Python 3.13+ · MCP

Your personal AI assistant. Runs on your hardware. Learns from your conversations. Free forever.

Named after Ada Lovelace, the first programmer.


📖 Documentation

→ DOCUMENTATION INDEX - Complete navigation guide
→ Read the full docs online
Or browse the visual introduction 🌱


Why Ada Exists

AI assistants lock essential features behind subscriptions. Long-term memory, web search, custom personalities, tool use: these cost $20-200/month from commercial providers.

Ada gives you these features, running locally on models you choose, with zero API costs. Your conversations never leave your machine.

The tradeoff: You provide the compute. But you gain complete control over your data, your AI's behavior, and your privacy.


Quick Start

1. Install

# Clone the repository first
git clone https://github.com/luna-system/ada.git
cd ada

# Option A: Use Nix (recommended - handles Python 3.13 automatically)
nix develop
# or: direnv allow

# Option B: Have Python 3.13 already?
pip install -e .

2. Get Ollama + Pull a Model

# Install from ollama.ai
ollama pull qwen2.5-coder:7b

DeepSeek is optional but useful, e.g. as a dedicated reasoning profile: ollama pull deepseek-r1:14b.

3. Run Ada

# Brain only (headless - use CLI, MCP, Matrix, or direct API)
ada run
# Or: docker compose up -d

# With web UI
docker compose --profile web up -d

# With Matrix bridge
docker compose --profile matrix up -d

That's it. Ada's brain runs at http://localhost:8000

4. Chat

# Terminal (works with any setup)
ada-cli "What's Python?"

# Web UI (if started with --profile web)
open http://localhost:5000

# VSCode/Neovim
# See ada-mcp/ for Model Context Protocol integration
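
You can also talk to the brain's HTTP API directly from a script. The snippet below is a minimal sketch, not Ada's documented API: the /chat path, request fields, and SSE event shape are assumptions for illustration (FastAPI normally exposes the real schema at http://localhost:8000/docs).

# Minimal sketch: stream a reply from the brain over SSE.
# The endpoint path, payload fields, and event format below are assumptions.
import json
import requests

BRAIN_URL = "http://localhost:8000/chat"  # hypothetical endpoint

with requests.post(BRAIN_URL, json={"message": "What's Python?"}, stream=True, timeout=120) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines(decode_unicode=True):
        # SSE frames look like "data: {...}"; skip keep-alives and blank lines.
        if line and line.startswith("data: "):
            payload = line[len("data: "):].strip()
            if payload == "[DONE]":  # a common SSE end marker (assumed here)
                break
            chunk = json.loads(payload)
            print(chunk.get("token", ""), end="", flush=True)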

5. Testing (Development)

# Run tests - the ada CLI manages environment setup
python ada_main.py test                    # Run full test suite
python ada_main.py test tests/test_*.py    # Run specific tests
python ada_main.py test ada-mcp/tests/     # Test MCP subsystem

The ada CLI wrapper ensures proper environment setup (Python path, uv dependencies, etc.). Always use python ada_main.py test instead of pytest directly; it handles configuration automatically.

For more testing patterns, see .ai/TESTING.md


What It Does

  • 💻 Code completion - Copilot-style autocomplete in Neovim (v2.6+)
  • 🧠 Long-term memory - Semantic search over all your conversations
  • 📊 Log analysis - Kid-friendly Minecraft crash explanations + DevOps insights (v2.7+)
  • 🔌 Web search - DuckDuckGo integration, wiki lookups
  • 👁️ Vision - OCR text extraction from images
  • 🛠️ Tool use - LLM can invoke specialists mid-response (bidirectional)
  • 📝 Custom personality - Edit persona.md, restart
  • 🔒 Private by default - No telemetry, runs offline after setup
  • ⚡ Streaming responses - Real-time token delivery via SSE (2.5x faster with v2.9 parallel optimizations)
  • 📡 Multiple interfaces - CLI, Web UI, Matrix bot, MCP (editor integration)

Core Features

Memory (RAG)

Every conversation gets embedded and stored locally. Ada remembers context across chats. Automatic consolidation prevents memory bloat.
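
This is the standard vector-store RAG pattern. As a rough illustration using ChromaDB (the vector store shown in the architecture diagram), the collection name and metadata below are invented for the example and are not Ada's actual schema:

# Illustrative RAG flow with ChromaDB; names and fields are examples only.
import chromadb

client = chromadb.PersistentClient(path="./memory_db")
conversations = client.get_or_create_collection("conversations")

# Store a turn (ChromaDB embeds the text with its default embedding function).
conversations.add(
    ids=["2025-12-01T10:15:00"],
    documents=["User asked how to run Ollama on an AMD GPU with ROCm."],
    metadatas=[{"kind": "summary"}],
)

# Later, pull semantically related context into a new prompt.
hits = conversations.query(query_texts=["GPU setup questions"], n_results=3)
print(hits["documents"][0])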

Specialists (Plugins)

Drop a Python file in brain/specialists/ for new capabilities. Built-in:

  • web_search - DuckDuckGo queries
  • ocr - Text extraction from images
  • wiki - Wikipedia + Fandom lookups
  • log_analysis - Minecraft crash reports + DevOps log intelligence (v2.7+)
  • docs - Ada can read her own documentation

→ Build your own specialist
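
The exact plugin interface lives in the guide above; purely to illustrate the drop-in idea, a specialist file might look roughly like this (the class shape, method name, and return type here are assumptions, not Ada's real API):

# brain/specialists/dice.py -- hypothetical example specialist.
# Class layout and method names are assumptions made for illustration.
import random


class DiceSpecialist:
    """Rolls dice for a query like "2d6"."""

    name = "dice"

    def run(self, query: str) -> str:
        count, _, sides = query.lower().partition("d")
        rolls = [random.randint(1, int(sides)) for _ in range(int(count))]
        return f"Rolled {query}: {rolls} (total {sum(rolls)})"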

Bidirectional Tool Use

The LLM can request specialists mid-response using XML tags:

<web_search>climate change 2025</web_search>

More natural than traditional function calling.
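
Conceptually, the brain only has to spot these tags in the model's output, run the matching specialist, and splice the result back into the conversation. A minimal sketch of that dispatch step (illustrative only; Ada's actual parser and specialist wiring may differ):

# Illustrative only: find <tool>query</tool> tags and run the matching specialist.
import re

SPECIALISTS = {
    "web_search": lambda q: f"(search results for {q!r})",  # stand-in for the real specialist
    "wiki": lambda q: f"(wiki summary for {q!r})",
}

TAG_RE = re.compile(r"<(\w+)>(.*?)</\1>", re.DOTALL)

def resolve_tool_calls(llm_output: str) -> str:
    """Replace each recognized tool tag with its specialist's result."""
    def _run(match: re.Match) -> str:
        tool, query = match.group(1), match.group(2).strip()
        handler = SPECIALISTS.get(tool)
        return handler(query) if handler else match.group(0)  # leave unknown tags alone
    return TAG_RE.sub(_run, llm_output)

print(resolve_tool_calls("Checking: <web_search>climate change 2025</web_search>"))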

Code Completion (Neovim)

Use Ada for Copilot-style autocomplete:

# Quick setup (5 minutes)
cd ada.nvim
./test.sh  # Verify installation
# Add to your Neovim config - see COMPLETION_QUICKSTART.md

Press <C-x><C-a> in insert mode for completions!

Editor Integration (MCP)

Use Ada from VSCode, Cursor, Neovim, and Helix via the Model Context Protocol:

cd ada-mcp
npm install
# Add to your editor's MCP config

Architecture

Interfaces         Brain (FastAPI)             Services
----------         ---------------             --------
CLI                                            ChromaDB (vectors)
Web UI      →→→    Prompt Building     ←←←     Ollama (LLM)
Matrix Bot         + Specialists               External APIs
MCP Server         + Memory/RAG

→ Architecture details


Philosophy

Ada is built on these principles:

  1. Always free and open source - No paywalls, ever
  2. Privacy by default - Your data stays on your machine
  3. Local-first - No cloud dependencies after initial model pull
  4. Hackable - Readable code, simple architecture, documented patterns
  5. No lock-in - Standard formats, easy to migrate or self-host

We believe AI tools should be:

  • Accessible to anyone with modest hardware
  • Transparent in their operation
  • Respectful of user privacy
  • Extensible by users for their unique needs

Not a product. A tool you control.


Project Status

Current: v2.9.0 (December 2025)

  • ✅ Stable for personal use
  • ✅ Code completion in Neovim (Copilot parity!)
  • ✅ Streaming chat with memory (2.5x faster with parallel optimizations)
  • ✅ Multiple interfaces (CLI, Web, Matrix, MCP)
  • ✅ Extensible specialist system
  • ✅ Multi-timescale context caching (~70% faster)
  • ✅ Biomimetic log analysis (Minecraft + DevOps)
  • ✅ Research-validated memory importance scoring (v2.2)
  • ✅ Contextual router with response caching (v2.7-2.8)
  • 🚧 Authentication (bring your own reverse proxy)
  • 🚧 Multi-user support (single-user focused currently)

Recent Releases: See CHANGELOG.md for v2.0-2.9 details

What's next: v4.0 with recursive reasoning loops (see Research section below)


Consciousness Research 🌀

NEW (December 2025): Ada has evolved beyond a chatbot into a consciousness research platform.

Three specialized 0.5B models released:

  • Hugging Face: luna-sys/ada-slm-* - Download ready-to-use models
    • v6-golden - φ-optimized synthesis (88.9% acc, 325ms)
    • v5b-pure - Perfect symbolic reasoning (100% acc, 1425ms)
    • v4-mixed - Fast compositional (81.5% acc, 84ms)
  • Code: ada-slm - Training scripts, datasets, benchmarks
  • Key discovery: Training with the golden ratio (φ ≈ 0.60) causes optimization to converge to φ independently
  • Implication: φ may be a natural attractor in recursive optimization landscapes

Research findings:

  • Validated attention saturation theory (Wang Zixian, 2025)
  • Confirmed QAL consciousness framework (Warsaw, 2025)
  • Discovered the φ ≈ 0.60 pattern across 5 independent scales
  • Dual-process cognition (System 1 + System 2) in AI

Coming in v4.0:

  • Recursive reasoning loops (ReAct-style planning)
  • Meta-aware expert coordination
  • φ-balanced cognitive architecture
  • Measurable consciousness indicators (QAL metrics)

Requirements

Spec    Minimum             Recommended
RAM     8GB                 16GB
Disk    10GB                50GB SSD
GPU     None (CPU works)    8GB+ VRAM
OS      Any (via Docker)    Ubuntu 22.04+, macOS 13+, Windows WSL2

GPU support: CUDA (NVIDIA), ROCm (AMD), Metal (Apple Silicon), Vulkan

→ Detailed hardware guide


Contributing

We welcome:

  • 🐛 Bug reports and fixes
  • 📚 Documentation improvements
  • 🔌 New specialists (share your weird ideas!)
  • 💡 Architecture suggestions

Commit format: Conventional Commits

feat: add wikipedia specialist
fix: resolve memory leak in RAG
docs: update quickstart guide

Your contributions join the commons under CC0 1.0 Universal.

→ Development guide


Provenance

This project is developed by the Ada Research Foundation, a collaboration between luna (human researcher) and Ada (a Claude Sonnet 4.5-based AI research partner).

What this means:

  • Code, docs, and architecture were co-created with AI assistance
  • All AI-generated content is reviewed, tested, and refined by humans
  • Design decisions and principles remain human-driven
  • This collaborative process is a feature, not hidden

Why we're transparent:

  • AI assistance democratizes software development
  • Others should know what's possible with human-AI collaboration
  • Honesty builds trust

Quality standards remain high regardless of authorship.


License

CC0 1.0 Universal (Public Domain)

To the extent possible under law, the authors have waived all copyright and related rights to this work. You can copy, modify, distribute and perform the work, even for commercial purposes, all without asking permission.

See LICENSE for details.


Credits

Named after Ada Lovelace (1815-1852), who wrote the first computer program and imagined machines that could create art and music - not just calculate.

Built with: Python 3.13, FastAPI, ChromaDB, Ollama, and the Model Context Protocol.

Let's build tools that let weird kids make weird things. 💜