Autonomous, self-improving multi-channel AI agent built in Rust. Inspired by Open Claw.
```
  ___                   ___          _
 / _ \ _ __  ___ _ _   / __|_ _ __ _| |__ ___
| (_) | '_ \/ -_) ' \ | (__| '_/ _` | '_ \(_-<
 \___/| .__/\___|_||_| \___|_| \__,_|_.__//__/
      |_|
```
🦀 The autonomous, self-improving AI agent. Single Rust binary. Every channel.
Author: Adolfo Usier
⭐ Star us on GitHub if you like what you see!
OpenCrabs runs as a single binary in your terminal: no server, no gateway, no infrastructure. It makes direct HTTPS calls to LLM providers from your machine. Nothing else leaves your computer.
| | OpenCrabs (Rust) | Node.js Frameworks (e.g. Open Claw) |
|---|---|---|
| Binary size | 17-22 MB single binary, zero dependencies | 1 GB+ node_modules with hundreds of transitive packages |
| Runtime | None; runs natively | Requires Node.js runtime + npm install |
| Attack surface | Zero network listeners; outbound HTTPS only | Server infrastructure: open ports, auth layers, middleware |
| API key security | Keys on your machine only; zeroize clears them from RAM on drop, [REDACTED] in all debug output | Keys in env vars or config; GC doesn't guarantee memory clearing; heap dumps can leak secrets |
| Data residency | 100% local: SQLite DB, embeddings, brain files, all in ~/.opencrabs/ | Server-side storage, potential multi-tenant data, network transit |
| Supply chain | Single compiled binary. Rust's type system prevents buffer overflows, use-after-free, and data races at compile time | npm ecosystem: typosquatting, dependency confusion, prototype pollution |
| Memory safety | Compile-time guarantees: no GC, no null pointers, no data races | GC-managed, prototype pollution, type coercion bugs |
| Concurrency | tokio async + Rust ownership = zero data races guaranteed | Single-threaded event loop, worker threads share memory unsafely |
| Native TTS/STT | Built-in local speech-to-text (whisper.cpp) and text-to-speech; ~130 MB total stack, fully offline | No native voice. Requires external APIs (Google, AWS, Azure) or heavy Python dependencies (PyTorch, ~5 GB+) |
| Telemetry | Zero. No analytics, no tracking, no remote logging | Server infra typically includes monitoring, logging pipelines, APM |
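The zeroize-on-drop behavior described above can be sketched in plain Rust. This is an illustrative stand-in, not the actual OpenCrabs code (which uses the `zeroize` crate); `SecretKey` is a hypothetical name:

```rust
use std::ptr;

/// Holds an API key and overwrites its bytes when dropped,
/// so the secret does not linger in freed heap memory.
struct SecretKey(String);

impl Drop for SecretKey {
    fn drop(&mut self) {
        // Volatile writes stop the compiler from optimizing the wipe away.
        unsafe {
            for b in self.0.as_bytes_mut().iter_mut() {
                ptr::write_volatile(b, 0);
            }
        }
    }
}

impl std::fmt::Debug for SecretKey {
    /// Redact the key in all debug output.
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        f.write_str("[REDACTED]")
    }
}

fn main() {
    let key = SecretKey("sk-ant-example".to_string());
    assert_eq!(format!("{:?}", key), "[REDACTED]");
    // `key` is wiped when it goes out of scope here.
}
```

The `Debug` impl is what guarantees keys never appear in logs even when structs containing them are printed.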
What stays on your machine:

- All chat sessions and messages (SQLite)
- Tool executions (bash, file reads/writes, git)
- Memory and embeddings (local vector search)
- Voice transcription in local STT mode (whisper.cpp, on-device)
- Brain files, config, API keys
What leaves your machine (outbound HTTPS only, and only when you use these features):

- Your messages to the LLM provider API (Anthropic, OpenAI, GitHub Copilot, etc.)
- Web search queries (optional tool)
- GitHub API via the gh CLI (optional tool)
- Screenshots
- Why OpenCrabs?
- Core Features
- Supported AI Providers
- Agent-to-Agent (A2A) Protocol
- Quick Start
- Onboarding Wizard
- API Keys (keys.toml)
- Configuration (config.toml)
- Commands (commands.toml)
- Using Local LLMs
- Configuration
- Tool System
- Keyboard Shortcuts
- Debug and Logging
- Cron Jobs & Heartbeats
- Architecture
- Project Structure
- Development
- Platform Notes
- Troubleshooting
- Companion Tools
- Disclaimers
- Contributing
- License
- Acknowledgments
crabsdemo.mp4
| Feature | Description |
|---|---|
| Multi-Provider | Anthropic Claude, OpenAI, GitHub Copilot (uses your Copilot subscription), OpenRouter (400+ models), MiniMax, Google Gemini, and any OpenAI-compatible API (Ollama, LM Studio, LocalAI). Model lists fetched live from provider APIs, so new models are available instantly. Each session remembers its provider + model and restores it on switch |
| Fallback Providers | Configure a chain of fallback providers: if the primary fails, each fallback is tried in sequence automatically. Any configured provider can be a fallback. Config: [providers.fallback] providers = ["openrouter", "anthropic"] |
| Per-Provider Vision | Set vision_model per provider: the LLM calls analyze_image as a tool, which uses the vision model on the same provider API to describe images. The chat model stays the same and gets vision capability via tool call. Gemini vision takes priority when configured. Auto-configured for known providers (e.g. MiniMax) on first run |
| Real-time Streaming | Character-by-character response streaming with animated spinner showing model name and live text |
| Local LLM Support | Run with LM Studio, Ollama, or any OpenAI-compatible endpoint: 100% private, zero-cost |
| Cost Tracking | Per-message token count and cost displayed in header; /usage shows all-time breakdown grouped by model with real costs + estimates for historical sessions |
| Context Awareness | Live context usage indicator showing actual token counts (e.g. ctx: 45K/200K (23%)); auto-compaction at 70% with tool overhead budgeting; accurate tiktoken-based counting calibrated against API actuals |
| 3-Tier Memory | (1) Brain MEMORY.md: user-curated durable memory loaded every turn, (2) Daily Logs: auto-compaction summaries at ~/.opencrabs/memory/YYYY-MM-DD.md, (3) Hybrid Memory Search: FTS5 keyword search + local vector embeddings (embeddinggemma-300M, 768-dim) combined via Reciprocal Rank Fusion. Runs entirely local: no API key, no cost, works offline |
| Dynamic Brain System | System brain assembled from workspace MD files (SOUL, IDENTITY, USER, AGENTS, TOOLS, MEMORY), all editable live between turns |
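The Reciprocal Rank Fusion step in hybrid memory search scores each document as the sum of 1/(k + rank) over every ranked list it appears in. A minimal sketch follows; `rrf` is a hypothetical helper, and k = 60 is the constant common in the RRF literature (an assumption, not taken from OpenCrabs):

```rust
use std::collections::HashMap;

/// Reciprocal Rank Fusion: score(d) = sum over lists of 1 / (k + rank(d)),
/// where rank is 1-based. Documents appearing high in several lists win.
fn rrf(lists: &[Vec<&str>], k: f64) -> Vec<(String, f64)> {
    let mut scores: HashMap<String, f64> = HashMap::new();
    for list in lists {
        for (i, doc) in list.iter().enumerate() {
            // i is 0-based, so rank = i + 1.
            *scores.entry(doc.to_string()).or_insert(0.0) += 1.0 / (k + i as f64 + 1.0);
        }
    }
    let mut ranked: Vec<_> = scores.into_iter().collect();
    ranked.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    ranked
}

fn main() {
    let keyword = vec!["note_a", "note_b", "note_c"]; // FTS5 keyword hits
    let vector = vec!["note_b", "note_d", "note_a"];  // embedding hits
    let fused = rrf(&[keyword, vector], 60.0);
    // note_b appears near the top of both lists, so it ranks first.
    assert_eq!(fused[0].0, "note_b");
}
```

Because RRF works on ranks, not raw scores, it needs no score normalization between the FTS5 and embedding backends, which is why it is a popular fusion choice for hybrid search.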
| Feature | Description |
|---|---|
| Image Attachments | Paste image paths or URLs into the input; they are auto-detected and attached as vision content blocks for multimodal models |
| PDF Support | Attach PDF files by path. Anthropic supports PDFs natively; for other providers, text is extracted locally via pdf-extract |
| Document Parsing | Built-in parse_document tool extracts text from PDF, DOCX, HTML, TXT, MD, JSON, XML |
| Voice (STT) | Voice notes transcribed via API (Groq Whisper whisper-large-v3-turbo) or Local (whisper.cpp via whisper-rs, runs on-device). Choose the mode in /onboard:voice. Local mode: select a model size (Tiny 75 MB / Base 142 MB / Small 466 MB / Medium 1.5 GB), download from HuggingFace, zero API cost. Included by default |
| Voice (TTS) | Agent replies to voice notes with audio via API (OpenAI TTS gpt-4o-mini-tts) or Local (Piper TTS, runs on-device via Python venv). Choose the mode in /onboard:voice. Local mode: select a voice (Ryan / Amy / Lessac / Kristin / Joe / Cori), auto-downloads from HuggingFace, zero API cost. Falls back to text if disabled |
| Attachment Indicator | Attached images show as [IMG1:filename.png] in the input title bar |
| Image Generation | Agent generates images via Google Gemini (gemini-3.1-flash-image-preview "Nano Banana") using the generate_image tool, enabled via /onboard:image. Returned as native images/attachments in all channels |
| Feature | Description |
|---|---|
| Telegram Bot | Full-featured Telegram bot: shared session with TUI, photo/voice support, allowed user IDs, allowed chat/group IDs, respond_to filter (all/dm_only/mention) |
| WhatsApp | Connect via QR code pairing at runtime or from the onboarding wizard. Text + image, shared session with TUI, phone allowlist (allowed_phones), session persists across restarts |
| Discord | Full Discord bot: text + image + voice, allowed user IDs, allowed channel IDs, respond_to filter, shared session with TUI. Full proactive control via discord_send (17 actions): send, reply, react, unreact, edit, delete, pin, unpin, create_thread, send_embed, get_messages, list_channels, add_role, remove_role, kick, ban, send_file. Generated images sent as native Discord file attachments |
| Slack | Full Slack bot via Socket Mode: allowed user IDs, allowed channel IDs, respond_to filter, shared session with TUI. Full proactive control via slack_send (17 actions): send, reply, react, unreact, edit, delete, pin, unpin, get_messages, get_channel, list_channels, get_user, list_members, kick_user, set_topic, send_blocks, send_file. Generated images sent as native Slack file uploads. Bot token + app token from api.slack.com/apps (Socket Mode required) |
| Trello | Tool-only by default: the AI acts on Trello only when explicitly asked via trello_send. Opt-in polling via poll_interval_secs in config; when enabled, only @bot_username mentions from allowed users trigger a response. Full card management via trello_send (22 actions): add_comment, create_card, move_card, find_cards, list_boards, get_card, get_card_comments, update_card, archive_card, add_member_to_card, remove_member_from_card, add_label_to_card, remove_label_from_card, add_checklist, add_checklist_item, complete_checklist_item, list_lists, get_board_members, search, get_notifications, mark_notifications_read, add_attachment. API Key + Token from trello.com/power-ups/admin; board IDs and member-ID allowlist configurable |

When users send files, images, or documents across any channel, the agent receives the content automatically; no manual forwarding needed. Example: a user uploads a dashboard screenshot to a Trello card with the comment "I'm seeing this error"; the agent fetches the attachment, passes it through the vision pipeline, and responds with full context.
| Channel | Images (in) | Text files (in) | Documents (in) | Audio (in) | Image gen (out) |
|---|---|---|---|---|---|
| Telegram | ✅ vision pipeline | ✅ extracted inline | ✅ / PDF note | ✅ STT | ✅ native photo |
| WhatsApp | ✅ vision pipeline | ✅ extracted inline | ✅ / PDF note | ✅ STT | ✅ native image |
| Discord | ✅ vision pipeline | ✅ extracted inline | ✅ / PDF note | ✅ STT | ✅ file attachment |
| Slack | ✅ vision pipeline | ✅ extracted inline | ✅ / PDF note | ✅ STT | ✅ file upload |
| Trello | ✅ card attachments → vision | ✅ extracted inline | ❌ | ❌ | ✅ card attachment + embed |
| TUI | ✅ paste path → vision | ✅ paste path → inline | ❌ | ✅ STT | ✅ [IMG: name] display |
Images are passed to the active model's vision pipeline if it supports multimodal input, or routed to the analyze_image tool (Google Gemini vision) otherwise. Text files (.txt, .md, .json, .csv, source code, etc.) are extracted as UTF-8 and included inline up to 8 000 characters; in the TUI, simply paste or type the file path.
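Truncating inline text at a character cap must respect UTF-8 boundaries, since cutting mid-codepoint would produce invalid text. A minimal sketch (hypothetical helper, not OpenCrabs' actual code):

```rust
/// Truncate text to at most `max_chars` characters without splitting
/// a multi-byte UTF-8 sequence.
fn truncate_chars(s: &str, max_chars: usize) -> &str {
    match s.char_indices().nth(max_chars) {
        // Byte index of the first character past the cap: cut there.
        Some((byte_idx, _)) => &s[..byte_idx],
        // Fewer than `max_chars` characters: keep the whole string.
        None => s,
    }
}

fn main() {
    // 'é' is two bytes, but counts as one character.
    assert_eq!(truncate_chars("héllo", 3), "hél");
    assert_eq!(truncate_chars("short", 8000), "short");
}
```

Using `char_indices` rather than byte slicing is what makes the cut safe for non-ASCII file contents.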
| Feature | Description |
|---|---|
| Cursor Navigation | Full cursor movement: Left/Right arrows, Ctrl+Left/Right word jump, Home/End, Delete, Backspace at position |
| Input History | Persistent command history (~/.opencrabs/history.txt), loaded on startup, capped at 500 entries |
| Inline Tool Approval | Claude Code-style ❯ Yes / Always / No selector with arrow key navigation |
| Inline Plan Approval | Interactive plan review selector (Approve / Reject / Request Changes / View Plan) |
| Session Management | Create, rename, delete sessions with persistent SQLite storage; each session remembers its provider + model, and switching sessions auto-restores the provider (no manual /models needed); token counts and context % per session |
| Parallel Sessions | Multiple sessions can have in-flight requests to different providers simultaneously. Send a message in one session, switch to another, send another; both process in parallel. Background sessions auto-approve tool calls; you'll see results when you switch back |
| Scroll While Streaming | Scroll up during streaming without being yanked back to bottom; auto-scroll re-enables when you scroll back down or send a message |
| Compaction Summary | Auto-compaction shows the full summary in chat as a system message, so you see exactly what the agent remembered |
| Syntax Highlighting | 100+ languages with line numbers via syntect |
| Markdown Rendering | Rich text formatting with code blocks, headings, lists, and inline styles |
| Tool Context Persistence | Tool call groups saved to DB and reconstructed on session reload; no vanishing tool history |
| Multi-line Input | Alt+Enter / Shift+Enter for newlines; Enter to send |
| Abort Processing | Escape ×2 within 3 seconds to cancel any in-progress request |
| Feature | Description |
|---|---|
| Full Terminal Access | 30+ built-in tools (file I/O, glob, grep, web search, code execution, image gen/analysis, memory search, cron jobs) plus any CLI tool on your system via bash: GitHub CLI, Docker, SSH, Python, Node, ffmpeg, curl, and everything else just work |
| Per-Session Isolation | Each session is an independent agent with its own provider, model, context, and tool state. Sessions can run tasks in parallel against different providers: ask Claude a question in one session while Kimi works on code in another |
| Self-Sustaining | Agent can modify its own source, build, test, and hot-restart via Unix exec() |
| Self-Improving | Learns from experience: saves reusable workflows as custom commands, writes lessons learned to memory, updates its own brain files. All local; no data leaves your machine |
| Natural Language Commands | Tell OpenCrabs to create slash commands, and it writes them to commands.toml autonomously via the config_manager tool |
| Live Settings | Agent can read/write config.toml at runtime; Settings TUI screen (press S) shows current config; approval policy persists across restarts. Default: auto-approve (use /approve to change) |
| Web Search | DuckDuckGo (built-in, no key needed) + EXA AI (neural, free via MCP) by default; Brave Search optional (key in keys.toml) |
| Debug Logging | --debug flag enables file logging; DEBUG_LOGS_LOCATION env var for custom log directory |
| Agent-to-Agent (A2A) | HTTP gateway implementing A2A Protocol RC v1.0 for peer-to-peer agent communication via JSON-RPC 2.0. Supports message/send, message/stream (SSE), tasks/get, tasks/cancel. Built-in a2a_send tool lets the agent proactively call remote A2A agents. Optional Bearer token auth. Includes multi-agent debate (Bee Colony) with confidence-weighted consensus. Task persistence across restarts |
Models: claude-opus-4-6, claude-sonnet-4-5-20250929, claude-haiku-4-5-20251001, plus legacy Claude 3.x models
Setup in keys.toml:

```toml
[providers.anthropic]
api_key = "sk-ant-api03-YOUR_KEY"
```

OAuth tokens (sk-ant-oat prefix) are auto-detected and sent as Authorization: Bearer with the anthropic-beta: oauth-2025-04-20 header automatically.

Features: Streaming, tools, cost tracking, automatic retry with backoff
Models: GPT-5 Turbo, GPT-5
Setup in keys.toml:

```toml
[providers.openai]
api_key = "sk-YOUR_KEY"
```

Use your GitHub Copilot subscription: no API charges, no tokens to manage. OpenCrabs authenticates via the same OAuth device flow used by VS Code and other Copilot tools.

Setup: select GitHub Copilot in the onboarding wizard and press Enter. You'll see a one-time code to enter at github.com/login/device. Once authorized, models are fetched from the Copilot API automatically.
Requirements: An active GitHub Copilot subscription (Individual, Business, or Enterprise).
Manual config (without wizard)
The OAuth token is saved automatically during onboarding. If you need to re-authenticate, run /onboard:provider and select GitHub Copilot.
Enable in config.toml:

```toml
[providers.github]
enabled = true
default_model = "gpt-4o"
base_url = "https://api.githubcopilot.com/chat/completions"
```

Features: Streaming, tools, OpenAI-compatible API at api.githubcopilot.com. Copilot-specific headers (copilot-integration-id, editor-version) are injected automatically. Short-lived API tokens are refreshed in the background every ~25 minutes.
Setup in keys.toml (get a key at openrouter.ai/keys):

```toml
[providers.openrouter]
api_key = "sk-or-YOUR_KEY"
```

Access 400+ models from every major provider through a single API key: Anthropic, OpenAI, Google, Meta, Mistral, DeepSeek, Qwen, and many more. Includes free models (DeepSeek-R1, Llama 3.3, Gemma 2, Mistral 7B) and stealth/preview models as they drop.

The model list is fetched live from the OpenRouter API during onboarding and via /models, so no binary update is needed when new models are added.
Models: gemini-2.5-flash, gemini-2.0-flash, gemini-1.5-pro (fetched live from the Gemini API)

Setup in keys.toml (get a key at aistudio.google.com):

```toml
[providers.gemini]
api_key = "AIza..."
```

Enable and set the default model in config.toml:

```toml
[providers.gemini]
enabled = true
default_model = "gemini-2.5-flash"
```

Features: Streaming, tool use, vision, 1M+ token context window, live model list from the /models endpoint

Image generation & vision: Gemini also powers the separate [image] section for the generate_image and analyze_image agent tools. See Image Generation & Vision below.
Models: MiniMax-M2.5, MiniMax-M2.1, MiniMax-Text-01
Setup: get your API key from platform.minimax.io, then add it to keys.toml:

```toml
[providers.minimax]
api_key = "your-api-key"
```

MiniMax is an OpenAI-compatible provider with competitive pricing. It does not expose a /models endpoint, so the model list comes from config.toml (pre-configured with available models).
Use for: Ollama, LM Studio, LocalAI, Groq, or any OpenAI-compatible API.
Setup in config.toml. Every custom provider needs a name (the label after custom.):

```toml
[providers.custom.lm_studio]
enabled = true
base_url = "http://localhost:1234/v1"  # or your endpoint
default_model = "qwen2.5-coder-7b-instruct"
# Optional: list your available models - shows up in /models and /onboard
# so you can switch between them without editing config
models = ["qwen2.5-coder-7b-instruct", "llama-3-8B", "mistral-7B-instruct"]
```

Local LLMs (Ollama, LM Studio): no API key needed; just set base_url and default_model.

Remote APIs (Groq, Together, etc.): add the key in keys.toml using the same name:

```toml
[providers.custom.groq]
api_key = "your-api-key"
```

Note: /chat/completions is auto-appended to base URLs that don't include it.
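The auto-append behavior in that note amounts to a small URL normalization step. A sketch of the idea (hypothetical helper name, not the actual OpenCrabs function):

```rust
/// Append "/chat/completions" to a base URL unless it already ends with it.
fn normalize_endpoint(base_url: &str) -> String {
    let trimmed = base_url.trim_end_matches('/');
    if trimmed.ends_with("/chat/completions") {
        trimmed.to_string()
    } else {
        format!("{trimmed}/chat/completions")
    }
}

fn main() {
    assert_eq!(
        normalize_endpoint("http://localhost:1234/v1"),
        "http://localhost:1234/v1/chat/completions"
    );
    // Already-complete URLs pass through unchanged.
    assert_eq!(
        normalize_endpoint("http://localhost:1234/v1/chat/completions"),
        "http://localhost:1234/v1/chat/completions"
    );
}
```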
Multiple custom providers coexist. Define as many as you need with different names and switch between them via /models:

```toml
[providers.custom.lm_studio]
enabled = true
base_url = "http://localhost:1234/v1"
default_model = "qwen2.5-coder-7b-instruct"

[providers.custom.ollama]
enabled = false
base_url = "http://localhost:11434/v1"
default_model = "mistral"
```

The name after custom. is a label you choose (e.g. lm_studio, nvidia, groq). The one with enabled = true is active. Keys go in keys.toml under the same label. All configured custom providers persist; switching via /models just toggles enabled.
Kimi K2.5 is a frontier-scale multimodal Mixture-of-Experts (MoE) model available for free on the NVIDIA API Catalog; no billing setup or credit card required. It handles complex reasoning and image/video understanding, making it a strong free alternative to paid models like Claude or Gemini for experimentation and agentic workflows.
Tested and verified with OpenCrabs Custom provider setup.
Quick start:

- Sign up at the NVIDIA API Catalog and verify your account
- Go to the Kimi K2.5 model page and click Get API Key (or "View Code" to see an auto-generated key)
- Configure in OpenCrabs via /models or config.toml:

```toml
# config.toml
[providers.custom.nvidia]
enabled = true
base_url = "https://integrate.api.nvidia.com/v1"
default_model = "moonshotai/kimi-k2.5"
```

```toml
# keys.toml
[providers.custom.nvidia]
api_key = "nvapi-..."
```

Provider priority: MiniMax > OpenRouter > Anthropic > OpenAI > GitHub Copilot > Gemini > Custom. The first provider with enabled = true is used on new sessions. Each provider has its own API key in keys.toml; no sharing or confusion.
Per-session provider: Each session remembers which provider and model it was using. Switch to Claude in one session, Kimi in another; when you /sessions switch between them, the provider restores automatically. No need to /models every time. New sessions inherit the current provider.
If your primary provider goes down, fallback providers are tried automatically in sequence. Any provider with API keys already configured can be a fallback:

```toml
[providers.fallback]
enabled = true
providers = ["openrouter", "anthropic"]  # tried in order on failure
```

At runtime, if a request to the primary fails, each fallback is tried until one succeeds. Supports a single provider (provider = "openrouter") or a list.
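The fallback chain boils down to trying each provider in order and keeping the last error. A minimal sketch, with a hypothetical `send` callback standing in for the real provider client:

```rust
/// Try each provider in order until one succeeds; return the last error
/// if they all fail. Illustrative only, not the actual OpenCrabs code.
fn send_with_fallback<F>(providers: &[&str], mut send: F) -> Result<String, String>
where
    F: FnMut(&str) -> Result<String, String>,
{
    let mut last_err = String::from("no providers configured");
    for provider in providers {
        match send(provider) {
            Ok(reply) => return Ok(reply),
            Err(e) => last_err = format!("{provider}: {e}"),
        }
    }
    Err(last_err)
}

fn main() {
    // Primary times out; the first fallback succeeds.
    let reply = send_with_fallback(&["minimax", "openrouter", "anthropic"], |p| {
        if p == "minimax" {
            Err("timeout".into())
        } else {
            Ok(format!("ok via {p}"))
        }
    });
    assert_eq!(reply, Ok("ok via openrouter".to_string()));
}
```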
If your default model doesn't support vision but another model on the same provider does, set vision_model. The LLM calls analyze_image as a tool; the vision model describes the image and returns the description to the chat model as context:

```toml
[providers.minimax]
default_model = "MiniMax-M2.5"
vision_model = "MiniMax-Text-01"  # describes images for the chat model
```

MiniMax auto-configures this on first run. It works with any provider: just set vision_model to a vision-capable model on the same API.
OpenCrabs supports image generation and vision analysis via Google Gemini. These features are independent of the main chat provider: you can use Claude for chat and Gemini for images.
- Get a free API key from aistudio.google.com
- Run /onboard:image in chat (or go through onboarding Advanced mode) to configure
- Or add manually to keys.toml:

```toml
[image]
api_key = "AIza..."
```

And config.toml:

```toml
[image.generation]
enabled = true
model = "gemini-3.1-flash-image-preview"

[image.vision]
enabled = true
model = "gemini-3.1-flash-image-preview"
```

When enabled, two tools become available to the agent automatically:
| Tool | Description |
|---|---|
| generate_image | Generate an image from a text prompt; saves to ~/.opencrabs/images/ and returns the file path |
| analyze_image | Analyze an image file or URL via Gemini vision; works even when your main model doesn't support vision |
Example prompts:

- "Generate a pixel art crab logo" → agent calls generate_image, returns file path
- "What's in this image: /tmp/screenshot.png" → agent calls analyze_image via Gemini

Both tools use gemini-3.1-flash-image-preview ("Nano Banana"), Gemini's dedicated image-generation model that supports both vision input and image output in a single request.
OpenCrabs includes a built-in A2A gateway, an HTTP server implementing the A2A Protocol RC v1.0 for peer-to-peer agent communication. Other A2A-compatible agents can discover OpenCrabs, send it tasks, and get results back, all via standard JSON-RPC 2.0. The agent can also proactively call remote A2A agents using the built-in a2a_send tool.
Add to ~/.opencrabs/config.toml:

```toml
[a2a]
enabled = true
bind = "127.0.0.1"  # Loopback only (default) - use "0.0.0.0" to expose
port = 18790        # Gateway port
# api_key = "your-secret"  # Optional Bearer token auth for incoming requests
# allowed_origins = ["http://localhost:3000"]  # CORS (empty = blocked)
```

| Endpoint | Method | Description |
|---|---|---|
| /.well-known/agent.json | GET | Agent Card: discover skills, capabilities, supported content types |
| /a2a/v1 | POST | JSON-RPC 2.0: message/send, message/stream (SSE), tasks/get, tasks/cancel |
| /a2a/health | GET | Health check |
The agent has a built-in a2a_send tool to communicate with remote A2A agents:
| Action | Description |
|---|---|
| discover | Fetch a remote agent's Agent Card (no approval needed) |
| send | Send a task message to a remote agent |
| get | Check status of a remote task |
| cancel | Cancel a running remote task |
The tool supports optional api_key for authenticated endpoints and context_id for multi-turn conversations.
VPS agent (config.toml):

```toml
[a2a]
enabled = true
bind = "0.0.0.0"
port = 18790
api_key = "shared-secret"
```

Local agent: connect via SSH tunnel (recommended, no ports to open):

```bash
ssh -L 18791:127.0.0.1:18790 user@your-vps
```

Now the local agent can reach the VPS agent at http://127.0.0.1:18791. The agent will use a2a_send with that URL automatically.
```bash
# Discover the agent
curl http://127.0.0.1:18790/.well-known/agent.json | jq .

# Send a message (creates a task)
curl -X POST http://127.0.0.1:18790/a2a/v1 \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-secret" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "message/send",
    "params": {
      "message": {
        "role": "user",
        "parts": [{"text": "What tools do you have?"}]
      }
    }
  }'

# Poll a task by ID
curl -X POST http://127.0.0.1:18790/a2a/v1 \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-secret" \
  -d '{"jsonrpc":"2.0","id":2,"method":"tasks/get","params":{"id":"TASK_ID"}}'

# Cancel a running task
curl -X POST http://127.0.0.1:18790/a2a/v1 \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-secret" \
  -d '{"jsonrpc":"2.0","id":3,"method":"tasks/cancel","params":{"id":"TASK_ID"}}'
```

OpenCrabs supports multi-agent structured debate via the Bee Colony protocol, based on ReConcile (ACL 2024) confidence-weighted voting. Multiple "bee" agents argue across configurable rounds, each enriched with knowledge context from QMD memory search, then converge on a consensus answer with confidence scores.
- Loopback only by default: binds to 127.0.0.1, not 0.0.0.0
- Bearer token auth: set api_key to require Authorization: Bearer <key> on all JSON-RPC requests
- CORS locked down: no cross-origin requests unless allowed_origins is explicitly set
- Task persistence: active tasks survive restarts via SQLite
- For public exposure, use a reverse proxy (nginx/Caddy) with TLS plus the api_key auth
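The Bearer check above amounts to comparing the Authorization header against the configured key. A minimal sketch (hypothetical helper; the gateway's actual implementation may differ):

```rust
/// Validate an incoming `Authorization` header against an optional
/// configured api_key. No key configured means auth is disabled.
fn bearer_ok(auth_header: Option<&str>, api_key: Option<&str>) -> bool {
    match api_key {
        None => true, // auth disabled when no api_key is set
        Some(key) => auth_header
            .and_then(|h| h.strip_prefix("Bearer "))
            .map(|token| token == key)
            .unwrap_or(false),
    }
}

fn main() {
    assert!(bearer_ok(Some("Bearer shared-secret"), Some("shared-secret")));
    assert!(!bearer_ok(Some("Bearer wrong"), Some("shared-secret")));
    assert!(!bearer_ok(None, Some("shared-secret")));
    assert!(bearer_ok(None, None)); // no key configured: open
}
```

A production gateway would also want a constant-time comparison to avoid timing side channels.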
Grab a pre-built binary from GitHub Releases, available for Linux (amd64/arm64), macOS (amd64/arm64), and Windows.
```bash
# Download, extract, run
tar xzf opencrabs-linux-amd64.tar.gz
./opencrabs
```

The onboarding wizard handles everything on first run.

Note: /rebuild works even with pre-built binaries. It auto-clones the source to ~/.opencrabs/source/ on first use, then builds and hot-restarts. For active development or adding custom tools, Option 2 gives you the source tree directly.
```bash
# Requires nightly Rust (WhatsApp protocol uses portable_simd)
rustup toolchain install nightly
cargo +nightly install opencrabs
```

Linux (Debian/Ubuntu): install system deps first:

```bash
sudo apt-get install build-essential pkg-config clang libclang-dev libasound2-dev libssl-dev cmake
```

Large build: the build can use 8 GB+ in /tmp. If you run out of space:

```bash
CARGO_TARGET_DIR=~/.cargo/target cargo +nightly install opencrabs
```
Required for /rebuild, adding custom tools, or modifying the agent.
Prerequisites:

- Rust nightly (2024 edition): install Rust, then rustup toolchain install nightly. The project includes a rust-toolchain.toml that selects nightly automatically
- An API key from at least one supported provider
- SQLite (bundled via sqlx)
- macOS: Xcode CLI Tools + brew install cmake pkg-config (requires macOS 15+)
- Linux (Debian/Ubuntu): sudo apt-get install build-essential pkg-config clang libclang-dev libasound2-dev libssl-dev cmake
- Linux (Fedora/RHEL): sudo dnf install gcc gcc-c++ make pkg-config openssl-devel cmake
- Linux (Arch): sudo pacman -S base-devel pkg-config openssl cmake

One-liner setup:

```bash
bash <(curl -sL https://raw.githubusercontent.com/adolfousier/opencrabs/main/src/scripts/setup.sh)
```

It detects your platform, installs all dependencies, and sets up Rust nightly.
```bash
# Clone
git clone https://github.com/adolfousier/opencrabs.git
cd opencrabs

# Build & run (development)
cargo run --bin opencrabs

# Or build release and run directly
cargo build --release
./target/release/opencrabs
```

Linux on older CPUs (Sandy Bridge / AVX1-only, no AVX2): the local STT and embedding engine require at minimum AVX instructions. If your CPU has AVX but not AVX2 (e.g. Intel Sandy Bridge or Ivy Bridge, roughly 2011-2012), you must build with:

```bash
RUSTFLAGS="-C target-cpu=native" cargo run --bin opencrabs
# or for release:
RUSTFLAGS="-C target-cpu=native" cargo build --release
```

CPUs without AVX at all are not supported for local STT/embedding. API STT mode works on any machine.
API Keys: OpenCrabs uses keys.toml instead of .env for API keys. The onboarding wizard will help you set it up, or edit ~/.opencrabs/keys.toml directly. Keys are handled at runtime; no OS environment pollution.

First run? The onboarding wizard will guide you through provider setup, workspace, and more. See Onboarding Wizard.
Run OpenCrabs in an isolated container. The build takes ~15 min (Rust release + LTO).

```bash
# Clone and run
git clone https://github.com/adolfousier/opencrabs.git
cd opencrabs

# Run with docker compose
# API keys are mounted from keys.toml on host
docker compose -f src/docker/compose.yml up --build
```

Config, workspace, and memory DB persist in a Docker volume across restarts. API keys in keys.toml are mounted into the container at runtime, never baked into the image.
```bash
# Interactive TUI (default)
cargo run --bin opencrabs
cargo run --bin opencrabs -- chat

# Onboarding wizard (first-time setup)
cargo run --bin opencrabs -- onboard
cargo run --bin opencrabs -- chat --onboard  # Force wizard before chat

# Non-interactive single command
cargo run --bin opencrabs -- run "What is Rust?"
cargo run --bin opencrabs -- run --format json "List 3 programming languages"
cargo run --bin opencrabs -- run --format markdown "Explain async/await"

# Configuration
cargo run --bin opencrabs -- init    # Initialize config
cargo run --bin opencrabs -- config  # Show current config
cargo run --bin opencrabs -- config --show-secrets

# Database
cargo run --bin opencrabs -- db init   # Initialize database
cargo run --bin opencrabs -- db stats  # Show statistics

# Debug mode
cargo run --bin opencrabs -- -d  # Enable file logging
cargo run --bin opencrabs -- -d run "analyze this"

# Log management
cargo run --bin opencrabs -- logs status
cargo run --bin opencrabs -- logs view
cargo run --bin opencrabs -- logs view -l 100
cargo run --bin opencrabs -- logs clean
cargo run --bin opencrabs -- logs clean -d 3
```

Tip: after cargo build --release, run the binary directly: ./target/release/opencrabs
After downloading or building, add the binary to your PATH so you can run opencrabs from any project directory:
```bash
# Symlink (recommended - always points to latest build)
sudo ln -sf $(pwd)/target/release/opencrabs /usr/local/bin/opencrabs

# Or copy
sudo cp target/release/opencrabs /usr/local/bin/
```

Then from any project:

```bash
cd /your/project
opencrabs
```

Use /cd inside OpenCrabs to switch working directory at runtime without restarting.
Output formats for non-interactive mode: text (default), json, markdown
First-time users are guided through a 10-step setup wizard that appears automatically after the splash screen.
- Automatic: when no ~/.opencrabs/config.toml exists and no API keys are set in keys.toml
- CLI: cargo run --bin opencrabs -- onboard (or opencrabs onboard after install)
- Chat flag: cargo run --bin opencrabs -- chat --onboard to force the wizard before chat
- Slash command: type /onboard in the chat to re-run it anytime
| Step | Title | What It Does |
|---|---|---|
| 1 | Mode Selection | QuickStart (sensible defaults) vs Advanced (full control) |
| 2 | Model & Auth | Pick provider (Anthropic, OpenAI, GitHub Copilot, Gemini, OpenRouter, Minimax, Custom) → enter token/key or sign in via OAuth → model list fetched live from API → select model. Auto-detects existing keys from keys.toml |
| 3 | Workspace | Set brain workspace path (default ~/.opencrabs/) and seed template files (SOUL.md, IDENTITY.md, etc.) |
| 4 | Gateway | Configure HTTP API gateway: port, bind address, auth mode |
| 5 | Channels | Toggle messaging integrations (Telegram, Discord, WhatsApp, Slack, Trello) |
| 6 | Voice | Choose STT mode: Off / API (Groq Whisper) / Local (whisper.cpp). Choose TTS mode: Off / API (OpenAI TTS) / Local (Piper TTS). Local modes show model/voice picker with download progress |
| 7 | Image Handling | Enable Gemini image generation and/or vision analysis (uses a separate Google AI key) |
| 8 | Daemon | Install background service (systemd on Linux, LaunchAgent on macOS) |
| 9 | Health Check | Verify API key, config, workspace; shows a pass/fail summary |
| 10 | Brain Personalization | Tell the agent about yourself and how you want it to behave; the AI generates personalized brain files (SOUL.md, IDENTITY.md, USER.md, etc.) |
QuickStart mode skips steps 4-8 with sensible defaults. Advanced mode lets you configure everything.
Type /onboard:voice or /onboard:image in chat to jump directly to Voice or Image setup anytime.
Run speech-to-text on-device with zero API cost. Included by default in prebuilt binaries and cargo +nightly install opencrabs.
In /onboard:voice, select Local mode, pick a model size, and press Enter to download. Models are stored at ~/.local/share/opencrabs/models/whisper/.
Building from source: local STT requires CMake and a C++ compiler (for whisper.cpp). To build without it:

```bash
cargo +nightly install opencrabs --no-default-features --features telegram,whatsapp,discord,slack,trello
```
| Model | Size | Quality |
|---|---|---|
| Tiny | ~75 MB | Fast, lower accuracy |
| Base | ~142 MB | Good balance |
| Small | ~466 MB | High accuracy |
| Medium | ~1.5 GB | Best accuracy |
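For scripting around local STT (e.g. a pre-download disk-space check), the table above corresponds to the `local-*` identifiers used in `config.toml`. A minimal sketch — the helper name is illustrative, not an OpenCrabs API, and the sizes are rounded estimates from the table:

```rust
/// Approximate download size in MB for each local whisper.cpp model,
/// keyed by the local_stt_model identifiers from config.toml.
/// Illustrative helper; sizes are rounded from the table above.
fn whisper_model_size_mb(model: &str) -> Option<u32> {
    match model {
        "local-tiny" => Some(75),
        "local-base" => Some(142),
        "local-small" => Some(466),
        "local-medium" => Some(1536), // ~1.5 GB
        _ => None,
    }
}

fn main() {
    // Unknown identifiers yield None rather than a bogus size.
    assert_eq!(whisper_model_size_mb("local-base"), Some(142));
    assert_eq!(whisper_model_size_mb("local-xl"), None);
    println!("ok");
}
```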
Config (config.toml):
[voice]
stt_enabled = true
stt_mode = "local" # "api" (default) or "local"
local_stt_model = "local-base" # local-tiny, local-base, local-small, local-medium
tts_enabled = true
tts_mode = "local" # "api" (default) or "local"
local_tts_voice = "ryan" # ryan, amy, lessac, kristin, joe, cori

Two input fields: About You (who you are) and Your OpenCrabs (how the agent should behave). The LLM uses these plus the 6 workspace template files to generate personalized brain files.
- First run: Empty fields, static templates as reference β LLM generates β writes to workspace
- Re-run: Fields pre-populated with a truncated preview of existing USER.md/IDENTITY.md; edit to regenerate, or press Esc to skip
- Regeneration: LLM receives the current workspace files (not static templates), so any manual edits you made are preserved as context
- Overwrite: Only files with new AI-generated content are overwritten; untouched files keep their current state
- No extra persistence files β the brain files themselves are the source of truth
| Key | Action |
|---|---|
| `Tab` / `Shift+Tab` | Navigate between fields |
| `Up` / `Down` | Scroll through lists |
| `Enter` | Confirm / next step |
| `Space` | Toggle checkboxes |
| `Esc` | Go back one step |
OpenCrabs uses ~/.opencrabs/keys.toml as the single source for all API keys, bot tokens, and search keys. No .env files, no OS keyring, no environment variables for secrets. Keys are loaded at runtime and can be modified by the agent.
# ~/.opencrabs/keys.toml - chmod 600!
# LLM Providers
[providers.anthropic]
api_key = "sk-ant-api03-YOUR_KEY" # or OAuth: "sk-ant-oat01-..."
[providers.openai]
api_key = "sk-YOUR_KEY"
[providers.github]
api_key = "gho_..." # OAuth token (auto-saved by onboarding wizard)
[providers.openrouter]
api_key = "sk-or-YOUR_KEY"
[providers.minimax]
api_key = "your-minimax-key"
[providers.gemini]
api_key = "AIza..." # Get from aistudio.google.com
[providers.custom.your_name]
api_key = "your-key" # not required for local LLMs
# Image Generation & Vision (independent of main chat provider)
[image]
api_key = "AIza..." # Same Google AI key as providers.gemini (can reuse)
# Messaging Channels: tokens/secrets only (config.toml holds allowed_users, allowed_channels, etc.)
[channels.telegram]
token = "123456789:ABCdef..."
[channels.discord]
token = "your-discord-bot-token"
[channels.slack]
token = "xoxb-your-bot-token"
app_token = "xapp-your-app-token" # Required for Socket Mode
[channels.trello]
app_token = "your-trello-api-key" # API Key from trello.com/power-ups/admin
token = "your-trello-api-token" # Token from the authorization URL
# Web Search
[providers.web_search.exa]
api_key = "your-exa-key"
[providers.web_search.brave]
api_key = "your-brave-key"
# Voice (STT/TTS)
# STT API mode (default): uses Groq Whisper
[providers.stt.groq]
api_key = "your-groq-key"
# STT Local mode: no API key needed (runs whisper.cpp on device)
# Set stt_mode = "local" and local_stt_model in config.toml
# TTS API mode (default): uses OpenAI TTS
[providers.tts.openai]
api_key = "your-openai-key"
# TTS Local mode: no API key needed (runs Piper TTS on device)
# Set tts_mode = "local" and local_tts_voice in config.toml

OAuth tokens (sk-ant-oat prefix) are auto-detected: OpenCrabs sends Authorization: Bearer with the anthropic-beta: oauth-2025-04-20 header automatically.
Trello note: `app_token` holds the Trello API Key and `token` holds the Trello API Token; `app_token` is the app-level credential and `token` is the user-level credential. Board IDs are configured via `board_ids` in `config.toml`.
Security: Always `chmod 600 ~/.opencrabs/keys.toml` and add `keys.toml` to `.gitignore`.
OpenCrabs works with any OpenAI-compatible local inference server for 100% private, zero-cost operation.
- Download and install LM Studio
- Download a model (e.g., qwen2.5-coder-7b-instruct, Mistral-7B-Instruct, Llama-3-8B)
- Start the local server (default port 1234)
- Add to config.toml (no API key needed):
[providers.custom.lm_studio]
enabled = true
base_url = "http://localhost:1234/v1"
default_model = "qwen2.5-coder-7b-instruct" # Must EXACTLY match LM Studio model name
models = ["qwen2.5-coder-7b-instruct", "llama-3-8B", "mistral-7B-instruct"]

Critical: The default_model value must exactly match the model name shown in LM Studio's Local Server tab (case-sensitive).
ollama pull mistral

Add to config.toml (no API key needed):
[providers.custom.ollama]
enabled = true
base_url = "http://localhost:11434/v1"
default_model = "mistral"
models = ["mistral", "llama3", "codellama"]

Want both LM Studio and Ollama configured? Use named providers and switch via /models:
[providers.custom.lm_studio]
enabled = true
base_url = "http://localhost:1234/v1"
default_model = "qwen2.5-coder-7b-instruct"
models = ["qwen2.5-coder-7b-instruct", "llama-3-8B", "mistral-7B-instruct"]
[providers.custom.ollama]
enabled = false
base_url = "http://localhost:11434/v1"
default_model = "mistral"
models = ["mistral", "llama3", "codellama"]

The name after custom. is just a label you choose. The first one with enabled = true is used. Switch anytime via /models or /onboard.
| Model | RAM | Best For |
|---|---|---|
| Qwen-2.5-7B-Instruct | 16 GB | Coding tasks |
| Mistral-7B-Instruct | 16 GB | General purpose, fast |
| Llama-3-8B-Instruct | 16 GB | Balanced performance |
| DeepSeek-Coder-6.7B | 16 GB | Code-focused |
| TinyLlama-1.1B | 4 GB | Quick responses, lightweight |
Tips:
- Start with Q4_K_M quantization for best speed/quality balance
- Set context length to 8192+ in LM Studio settings
- Use Ctrl+N to start a new session if you hit context limits
- GPU acceleration significantly improves inference speed
| Aspect | Cloud (Anthropic) | Local (LM Studio) |
|---|---|---|
| Privacy | Data sent to API | 100% private |
| Cost | Per-token pricing | Free after download |
| Speed | 1-2s (network) | 2-10s (hardware-dependent) |
| Quality | Excellent (Claude 4.x) | Good (model-dependent) |
| Offline | Requires internet | Works offline |
See LM_STUDIO_GUIDE.md for detailed setup and troubleshooting.
OpenCrabs uses three config files β all hot-reloaded at runtime (no restart needed):
| File | Purpose | Secret? |
|---|---|---|
| `~/.opencrabs/config.toml` | Provider settings, models, channels, allowed users | No (safe to commit) |
| `~/.opencrabs/keys.toml` | API keys, bot tokens | Yes (chmod 600, never commit) |
| `~/.opencrabs/commands.toml` | User-defined slash commands | No |
Changes to any of these files are picked up automatically within ~300ms while OpenCrabs is running. The active LLM provider, channel allowlists, approval policy, and slash command autocomplete all update without restart.
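The ~300 ms pickup can be approximated with plain mtime polling. The sketch below is illustrative (OpenCrabs' actual watcher implementation is not shown here) and uses only the standard library:

```rust
use std::fs;
use std::path::Path;
use std::time::SystemTime;

/// One polling step: returns true if the file's mtime changed since the
/// previous call. Calling this every ~300 ms in a loop approximates the
/// hot-reload behavior described above.
fn poll_changed(path: &Path, last: &mut Option<SystemTime>) -> bool {
    let mtime = fs::metadata(path).ok().and_then(|m| m.modified().ok());
    let changed = last.is_some() && mtime.is_some() && mtime != *last;
    *last = mtime;
    changed
}

fn main() {
    let path = std::env::temp_dir().join("opencrabs_poll_demo.toml");
    fs::write(&path, "a = 1").unwrap();

    let mut last = None;
    assert!(!poll_changed(&path, &mut last)); // first observation: baseline only

    // Sleep past coarse filesystem mtime granularity, then rewrite.
    std::thread::sleep(std::time::Duration::from_millis(1100));
    fs::write(&path, "a = 2").unwrap();
    assert!(poll_changed(&path, &mut last)); // change detected: reload config

    fs::remove_file(&path).unwrap();
    println!("reload detected");
}
```

On a change, the real loop would re-parse the TOML and swap the live settings, which is why edits to the provider, allowlists, and commands take effect without a restart.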
Search order for config.toml:
1. ~/.opencrabs/config.toml (primary)
2. ~/.config/opencrabs/config.toml (legacy fallback)
3. ./opencrabs.toml (current directory override)
Full annotated example: the onboarding wizard writes this for you, but you can edit it directly:
# ~/.opencrabs/config.toml
[agent]
approval_policy = "auto-always" # auto-always (default) | auto-session | ask
working_directory = "~/projects" # default working dir for Bash/file tools
# -- Channels -------------------------------------------------------------------
[channels.telegram]
enabled = true
allowed_users = ["123456789"] # Telegram user IDs (get yours via /start)
respond_to = "all" # all | mention | dm_only
[channels.discord]
enabled = true
allowed_users = ["637291214508654633"] # Discord user IDs
allowed_channels = ["1473207147025137778"]
respond_to = "mention" # all | mention | dm_only
[channels.slack]
enabled = true
allowed_users = ["U066SGWQZFG"] # Slack user IDs
allowed_channels = ["C0AEY3C2P9V"]
respond_to = "mention" # all | mention | dm_only
[channels.whatsapp]
enabled = true
allowed_phones = ["+1234567890"] # E.164 format
[channels.trello]
enabled = true
board_ids = ["your-board-id"] # From the board URL
# -- Providers ------------------------------------------------------------------
[providers.anthropic]
enabled = true
default_model = "claude-sonnet-4-6"
[providers.gemini]
enabled = false
[providers.openai]
enabled = false
default_model = "gpt-5-nano"
# -- Image ----------------------------------------------------------------------
[image.generation]
enabled = true
model = "gemini-3.1-flash-image-preview"
[image.vision]
enabled = true
model = "gemini-3.1-flash-image-preview"

API keys go in keys.toml, not here. See API Keys (keys.toml).
User-defined slash commands: the agent writes these autonomously via the config_manager tool, or you can edit them directly:
# ~/.opencrabs/commands.toml
[[commands]]
name = "/deploy"
description = "Deploy to staging server"
action = "prompt"
prompt = "Run ./deploy.sh staging and report the result."
[[commands]]
name = "/standup"
description = "Generate a daily standup summary"
action = "prompt"
prompt = "Summarize my recent git commits and open tasks for a standup. Be concise."
[[commands]]
name = "/rebuild"
description = "Build and restart OpenCrabs from source"
action = "prompt"
prompt = 'Run `RUSTFLAGS="-C target-cpu=native" cargo build --release` in /srv/rs/opencrabs. If it succeeds, ask if I want to restart now.'

Commands appear instantly in autocomplete (type /) after saving; no restart needed. The action field supports:
- "prompt": sends the prompt text to the agent for execution
- "system": displays the text inline as a system message
Keep multiple providers configured: enable the one you want to use, disable the rest. Switch anytime by toggling enabled or using /onboard.
In config.toml:
# Local LLM - currently active
[providers.custom.lm_studio]
enabled = true
base_url = "http://localhost:1234/v1"
default_model = "qwen2.5-coder-7b-instruct"
models = ["qwen2.5-coder-7b-instruct", "llama-3-8B"]
# Cloud API - disabled, enable when you need it
[providers.anthropic]
enabled = false
default_model = "claude-opus-4-6"

In keys.toml:
[providers.anthropic]
api_key = "sk-ant-api03-YOUR_KEY"

All API keys and secrets are stored in keys.toml, not in environment variables. The only env vars OpenCrabs uses are operational:
| Variable | Description |
|---|---|
| `DEBUG_LOGS_LOCATION` | Custom log directory path (default: .opencrabs/logs/) |
| `OPENCRABS_BRAIN_PATH` | Custom brain workspace path (default: ~/.opencrabs/) |
OpenCrabs tracks real token costs per model using a centralized pricing table at ~/.opencrabs/usage_pricing.toml. It's written automatically on first run with sensible defaults.
Why it matters:
- /usage shows real costs grouped by model across all sessions
- Old sessions with stored tokens but zero cost get estimated costs (shown as ~$X.XX in yellow)
- Unknown models show $0.00 instead of silently ignoring them
Customizing prices:
# ~/.opencrabs/usage_pricing.toml
# Edit live: changes take effect on next /usage open, no restart needed.
[providers.anthropic]
entries = [
{ prefix = "claude-sonnet-4", input_per_m = 3.0, output_per_m = 15.0 },
{ prefix = "claude-opus-4", input_per_m = 5.0, output_per_m = 25.0 },
{ prefix = "claude-haiku-4", input_per_m = 1.0, output_per_m = 5.0 },
]
[providers.minimax]
entries = [
{ prefix = "minimax-m2.5", input_per_m = 0.30, output_per_m = 1.20 },
]
# Add any provider: prefix is matched case-insensitively as a substring
[providers.my_custom_model]
entries = [
{ prefix = "my-model-v1", input_per_m = 1.00, output_per_m = 3.00 },
]

A full example with all built-in providers (Anthropic, OpenAI, MiniMax, Google, DeepSeek, Meta) is available at usage_pricing.toml.example in the repo root.
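The estimate behind those ~$X.XX figures is plain per-million-token arithmetic. A minimal sketch — the function name is illustrative, not the actual /usage internals:

```rust
/// Estimated USD cost from token counts and the per-million rates in
/// usage_pricing.toml (input_per_m / output_per_m).
fn estimate_cost(input_tokens: u64, output_tokens: u64, input_per_m: f64, output_per_m: f64) -> f64 {
    input_tokens as f64 / 1_000_000.0 * input_per_m
        + output_tokens as f64 / 1_000_000.0 * output_per_m
}

fn main() {
    // 200k input + 50k output at the claude-sonnet-4 rates above (3.0 / 15.0):
    // 0.2 * 3.0 + 0.05 * 15.0 = 0.60 + 0.75 = 1.35
    let cost = estimate_cost(200_000, 50_000, 3.0, 15.0);
    println!("~${:.2}", cost); // ~$1.35
}
```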
OpenCrabs includes 30+ built-in tools. The AI can use these during conversation:
| Tool | Description |
|---|---|
| `read_file` | Read file contents with syntax awareness |
| `write_file` | Create or modify files |
| `edit_file` | Precise text replacements in files |
| `bash` | Execute shell commands; any CLI tool on your system works |
| `ls` | List directory contents |
| `glob` | Find files matching patterns |
| `grep` | Search file contents with regex |
| `execute_code` | Run code in various languages |
| `notebook_edit` | Edit Jupyter notebooks |
| `parse_document` | Extract text from PDF, DOCX, HTML |
| Tool | Description |
|---|---|
| `web_search` | Search the web (DuckDuckGo, always available, no key needed) |
| `exa_search` | Neural web search via EXA AI (free via MCP, no API key needed; set key in keys.toml for higher rate limits) |
| `brave_search` | Web search via Brave Search (set key in keys.toml; free $5/mo credits at brave.com/search/api) |
| `http_request` | Make HTTP requests |
| `memory_search` | Hybrid semantic search across past memory logs: FTS5 keyword + vector embeddings (768-dim, local GGUF model) combined via RRF. No API key needed, runs offline |
| Tool | Description |
|---|---|
| `generate_image` | Generate images via Google Gemini; auto-sent as native images on all channels |
| `analyze_image` | Analyze images (local files or URLs) via vision model; uses Gemini vision or the provider's vision_model |
| Tool | Description |
|---|---|
| `telegram_send` | 19 actions: send, reply, edit, delete, pin, forward, send_photo, send_document, polls, buttons, admin ops |
| `discord_send` | 17 actions: send, reply, react, edit, delete, pin, threads, embeds, roles, kick, ban, send_file |
| `slack_send` | 17 actions: send, reply, react, edit, delete, pin, blocks, topics, members, send_file |
| `trello_send` | 22 actions: cards, comments, checklists, labels, members, attachments, board management, search |
| `channel_search` | Search captured message history across all channels (Telegram, Discord, Slack, WhatsApp) |
| Tool | Description |
|---|---|
| `task_manager` | Manage agent tasks |
| `plan` | Create structured execution plans |
| `config_manager` | Read/write config.toml and commands.toml at runtime (change settings, add/remove commands, reload config) |
| `session_context` | Access session information |
| `cron_manage` | Schedule recurring jobs: create, list, enable/disable, delete. Deliver results to any channel |
| `a2a_send` | Send tasks to remote A2A-compatible agents via JSON-RPC 2.0 |
| `evolve` | Download the latest release binary from GitHub and hot-restart (no Rust toolchain needed) |
| `rebuild` | Build from source (cargo build --release) and hot-restart |
OpenCrabs can leverage any CLI tool installed on your system via bash. Common integrations:
| Tool | Purpose | Example |
|---|---|---|
| `gh` | GitHub CLI: issues, PRs, repos, releases, actions | `gh issue list`, `gh pr create` |
| `gog` | Google CLI: Gmail, Calendar (OAuth) | `gog gmail search "is:unread"`, `gog calendar events` |
| `docker` | Container management | `docker ps`, `docker compose up` |
| `ssh` | Remote server access | `ssh user@host "command"` |
| `node` | Run JavaScript/TypeScript tools | `node script.js` |
| `python3` | Run Python scripts and tools | `python3 analyze.py` |
| `ffmpeg` | Audio/video processing | `ffmpeg -i input.mp4 output.gif` |
| `curl` | HTTP requests (fallback when http_request is insufficient) | `curl -s api.example.com` |
Any tool on your $PATH works. If it runs in your terminal, OpenCrabs can use it.
| Shortcut | Action |
|---|---|
| `Ctrl+C` | First press clears input, second press (within 3s) quits |
| `Ctrl+N` | New session |
| `Ctrl+L` | List/switch sessions |
| `Ctrl+K` | Clear current session |
| `Page Up/Down` | Scroll chat history |
| Mouse Scroll | Scroll chat history |
| `Escape` | Clear input / close overlay |
| Shortcut | Action |
|---|---|
| `Enter` | Send message |
| `Ctrl+J` | New line (vim; also Alt+Enter / Shift+Enter on supported terminals) |
| `←` / `→` | Move cursor one character |
| `↑` / `↓` | Navigate lines (multiline), jump to start/end (single-line), then history |
| `Ctrl+←` / `Ctrl+→` | Jump by word |
| `Home` / `End` | Start / end of current line |
| `Delete` | Delete character after cursor |
| `Ctrl+W` | Delete word before cursor (vim) |
| `Ctrl+U` | Delete to start of line (vim) |
| Left-click | Select/highlight a message |
| Right-click | Copy message to clipboard |
| `Escape` ×2 | Abort in-progress request |
| Command | Action |
|---|---|
| `/help` | Open help dialog |
| `/model` | Show current model |
| `/models` | Switch model (fetches live from provider API) |
| `/usage` | Token/cost stats: current session + all-time breakdown grouped by model, with estimated costs for historical sessions |
| `/onboard` | Run setup wizard (full flow) |
| `/onboard:provider` | Jump to provider/API key setup |
| `/onboard:workspace` | Jump to workspace settings |
| `/onboard:channels` | Jump to channel config |
| `/onboard:voice` | Jump to voice STT/TTS setup |
| `/onboard:image` | Jump to image handling setup |
| `/onboard:gateway` | Jump to API gateway settings |
| `/onboard:brain` | Jump to brain/persona setup |
| `/doctor` | Run connection health check |
| `/sessions` | Open session manager |
| `/approve` | Tool approval policy selector (approve-only / session / yolo) |
| `/compact` | Compact context (summarize + trim for long sessions) |
| `/rebuild` | Build from source & hot-restart: streams live compiler output to chat, auto exec() restarts on success (no prompt), auto-clones the repo if no source tree is found |
| `/whisper` | Voice-to-text: speak anywhere, pastes to clipboard |
| `/cd` | Change working directory (directory picker) |
| `/settings` or `S` | Open Settings screen (provider, approval, commands, paths) |
| `/stop` | Abort in-progress agent operation (channels only; TUI uses Escape ×2) |
When connected via messaging channels, the following slash commands are available directly in chat. These are the channel equivalents of TUI commands β type them as regular messages.
| Command | Action |
|---|---|
| `/help` | List available channel commands |
| `/usage` | Session token & cost stats (current session + all-time breakdown by model) |
| `/models` | Switch AI model: shows platform-native buttons (Telegram inline keyboard, Discord buttons, Slack Block Kit). WhatsApp shows a plain text list |
| `/stop` | Abort the current agent operation immediately: cancels streaming, tool execution, and any pending approvals. Equivalent to double-Escape in the TUI |
Model switching via /models changes the model within the current provider and takes effect immediately (no restart needed). The selection persists to config.toml.
Any message that isn't a recognized command is forwarded to the AI agent as normal.
Each session shows its provider/model badge (e.g. [anthropic/claude-sonnet-4-6]) and token count. Sessions processing in the background show a spinner; sessions with unread responses show a green dot.
| Shortcut | Action |
|---|---|
| `↑` / `↓` | Navigate sessions |
| `Enter` | Load selected session (auto-restores its provider + model) |
| `R` | Rename session |
| `D` | Delete session |
| `Esc` | Back to chat |
When the AI requests a tool that needs permission, an inline approval prompt appears in chat. Approvals are session-aware: background sessions auto-approve tool calls so they don't block, and switching sessions never loses a pending approval.
| Shortcut | Action |
|---|---|
| `↑` / `↓` | Navigate approval options |
| `Enter` | Confirm selected option |
| `D` / `Esc` | Deny the tool request |
| `V` | Toggle parameter details |
Approval options (TUI and all channels):
| Option | Effect |
|---|---|
| Yes | Approve this single tool call |
| Always (session) | Auto-approve all tools for this session (resets on restart) |
| YOLO (permanent) | Auto-approve all tools permanently, persists to config.toml |
| No | Deny this tool call |
Use /approve to change your approval policy at any time (persisted to config.toml):
| Policy | Description |
|---|---|
| Approve-only | Prompt before every tool execution. Use this if you want to review each action the agent takes. Set with /approve β "Approve-only (always ask)" |
| Allow all (session) | Auto-approve all tools for the current session only, resets on restart |
| Yolo mode | Execute everything without approval (default for new users). Set with /approve β "Yolo mode" |
Note: New installations default to Yolo mode so the agent can work autonomously out of the box. If you prefer to review each tool call, run /approve and select Approve-only (always ask).
OpenCrabs uses a conditional logging system: no log files by default.
# Enable debug mode (creates log files)
opencrabs -d
cargo run -- -d
# Logs stored in ~/.opencrabs/logs/ (user workspace, not in repo)
# Daily rolling rotation, auto-cleanup after 7 days
# Management
opencrabs logs status # Check logging status
opencrabs logs view # View recent entries
opencrabs logs clean # Clean old logs
opencrabs logs clean -d 3 # Clean logs older than 3 days

When debug mode is enabled:
- Log files created in ~/.opencrabs/logs/
- DEBUG level with thread IDs, file names, line numbers
- Daily rolling rotation
When disabled (default):
- No log files created
- Only warnings and errors to stderr
- Clean workspace
OpenCrabs's brain is dynamic and self-sustaining. Instead of a hardcoded system prompt, the agent assembles its personality, knowledge, and behavior from workspace files that can be edited between turns.
The brain reads markdown files from ~/.opencrabs/:
~/.opencrabs/              # Home: everything lives here
├── SOUL.md                # Personality, tone, hard behavioral rules
├── IDENTITY.md            # Agent name, vibe, style, workspace path
├── USER.md                # Who the human is, how to work with them
├── AGENTS.md              # Workspace rules, memory system, safety policies
├── TOOLS.md               # Environment-specific notes (SSH hosts, API accounts)
├── MEMORY.md              # Long-term curated context (never touched by auto-compaction)
├── SECURITY.md            # Security policies and access controls
├── BOOT.md                # Startup checklist (optional, runs on launch)
├── HEARTBEAT.md           # Periodic task definitions (optional)
├── BOOTSTRAP.md           # First-run onboarding wizard (deleted after setup)
├── config.toml            # App configuration (provider, model, approval policy)
├── keys.toml              # API keys (provider, channel, STT/TTS)
├── commands.toml          # User-defined slash commands
├── opencrabs.db           # SQLite: sessions, messages, plans
└── memory/                # Daily memory logs (auto-compaction summaries)
    └── YYYY-MM-DD.md      # One per day, multiple compactions stack
Brain files are re-read every turn: edit them between messages and the agent immediately reflects the changes. Missing files are silently skipped; a hardcoded brain preamble is always present.
| Tier | Location | Purpose | Managed By |
|---|---|---|---|
| 1. Brain MEMORY.md | `~/.opencrabs/MEMORY.md` | Durable, curated knowledge loaded into the system brain every turn | You (the user) |
| 2. Daily Memory Logs | `~/.opencrabs/memory/YYYY-MM-DD.md` | Auto-compaction summaries with structured breakdowns of each session | Auto (on compaction) |
| 3. Hybrid Memory Search | `memory_search` tool (FTS5 + vector) | Hybrid semantic search: BM25 keyword + vector embeddings (768-dim, local GGUF) combined via Reciprocal Rank Fusion. No API key, zero cost, runs offline | Agent (via tool call) |
How it works:
- When context hits 70%, auto-compaction summarizes the conversation into a structured breakdown (current task, decisions, files modified, errors, next steps)
- The summary is saved to a daily log at ~/.opencrabs/memory/2026-02-15.md (multiple compactions per day stack in the same file)
- The summary is shown to you in chat so you see exactly what was remembered
- The file is indexed in the background into the FTS5 database so the agent can search past logs with memory_search
- Brain MEMORY.md is never touched by auto-compaction; it stays as your curated, always-loaded context
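The 70% trigger above is a simple ratio check. A sketch (the function name is illustrative, not the actual OpenCrabs internals):

```rust
/// True once context usage reaches the 70% auto-compaction threshold.
/// Integer arithmetic: used/window >= 0.7  ⟺  used*10 >= window*7.
fn should_compact(used_tokens: u64, context_window: u64) -> bool {
    context_window > 0 && used_tokens * 10 >= context_window * 7
}

fn main() {
    assert!(!should_compact(100_000, 200_000)); // 50%: keep going
    assert!(should_compact(150_000, 200_000));  // 75%: summarize + trim
    println!("ok");
}
```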
Memory search combines two strategies via Reciprocal Rank Fusion (RRF) for best-of-both-worlds recall:
- FTS5 keyword search: BM25-ranked full-text matching with porter stemming
- Vector semantic search: 768-dimensional embeddings via a local GGUF model (embeddinggemma-300M, ~300 MB)
The embedding model downloads automatically on first TUI launch (~300 MB, one-time) and runs entirely on CPU. No API key, no cloud service, no per-query cost, works offline. If the model isn't available yet (first launch, still downloading), search gracefully falls back to FTS-only.
┌─────────────────────────────────────┐
│ ~/.opencrabs/memory/                │
│ ├── 2026-02-15.md                   │  Markdown files (daily logs)
│ ├── 2026-02-16.md                   │
│ └── 2026-02-17.md                   │
└──────────────┬──────────────────────┘
               │ index on startup +
               │ after each compaction
               ▼
┌─────────────────────────────────────────────────┐
│ memory.db (SQLite WAL mode)                     │
│ ┌───────────────────────┐  ┌──────────────────┐ │
│ │ documents + FTS5      │  │ vector embeddings│ │
│ │ (BM25, porter stem)   │  │ (768-dim, cosine)│ │
│ └───────────┬───────────┘  └────────┬─────────┘ │
└─────────────┼───────────────────────┼───────────┘
              │ MATCH query           │ cosine similarity
              ▼                       ▼
┌─────────────────────────────────────────────────┐
│ Reciprocal Rank Fusion (k=60)                   │
│ Merges keyword + semantic results               │
└─────────────────────┬───────────────────────────┘
                      ▼
┌─────────────────────────────────────────────────┐
│ Hybrid-ranked results with snippets             │
└─────────────────────────────────────────────────┘
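The fusion stage can be sketched in a few lines: each document scores 1/(k + rank) per list it appears in, with k = 60 as in the diagram. This is an illustrative implementation of RRF, not the actual memory.db code:

```rust
use std::collections::HashMap;

/// Merge two ranked result lists with Reciprocal Rank Fusion.
/// Each document scores 1/(k + rank) per list (rank is 1-based);
/// documents that appear in both lists accumulate both scores.
fn rrf(keyword: &[&str], semantic: &[&str], k: f64) -> Vec<(String, f64)> {
    let mut scores: HashMap<String, f64> = HashMap::new();
    for list in [keyword, semantic] {
        for (rank, doc) in list.iter().enumerate() {
            *scores.entry((*doc).to_string()).or_insert(0.0) += 1.0 / (k + (rank + 1) as f64);
        }
    }
    let mut ranked: Vec<(String, f64)> = scores.into_iter().collect();
    ranked.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap()); // highest fused score first
    ranked
}

fn main() {
    // A doc present in both lists outranks a top hit from only one list:
    // "b" scores 1/62 + 1/61, beating "a" at 1/61 alone.
    let fused = rrf(&["a", "b", "c"], &["b", "d"], 60.0);
    assert_eq!(fused[0].0, "b");
    println!("{:?}", fused);
}
```

This is why RRF needs no score normalization across the two searches: only ranks matter, so BM25 scores and cosine similarities never have to be made comparable.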
Why local embeddings instead of OpenAI/cloud?
| Local (embeddinggemma-300M) | Cloud API (e.g. OpenAI) | |
|---|---|---|
| Cost | Free forever | ~$0.0001/query, adds up |
| Privacy | 100% local, nothing leaves your machine | Data sent to third party |
| Latency | ~2ms (in-process, no network) | 100-500ms (HTTP round-trip) |
| Offline | Works without internet | Requires internet |
| Setup | Automatic, no API key needed | Requires API key + billing |
| Quality | Excellent for code/session recall (768-dim) | Slightly better for general-purpose |
| Size | ~300 MB one-time download | N/A |
Tell OpenCrabs in natural language: "Create a /deploy command that runs deploy.sh" and it writes the command to ~/.opencrabs/commands.toml via the config_manager tool:
[[commands]]
name = "/deploy"
description = "Deploy to staging server"
action = "prompt"
prompt = "Run the deployment script at ./src/scripts/deploy.sh for the staging environment."

Commands appear in autocomplete alongside built-in commands. After each agent response, commands.toml is automatically reloaded; no restart needed. Legacy commands.json files are auto-migrated on first load.
OpenCrabs can modify its own source code, build, test, and hot-restart itself, triggered by the agent via the rebuild tool or by the user via /rebuild:
/rebuild # User-triggered: build → restart prompt
rebuild tool # Agent-triggered: build → ProgressEvent::RestartReady → restart prompt
How it works:
- The agent edits source files using its built-in tools (read, write, edit, bash)
- SelfUpdater::build() runs cargo build --release asynchronously
- On success, a ProgressEvent::RestartReady is emitted and bridged to TuiEvent::RestartReady
- The TUI switches to RestartPending mode; the user presses Enter to confirm
- SelfUpdater::restart(session_id) replaces the process via Unix exec()
- The new binary starts with opencrabs chat --session <uuid>, resuming the same conversation
- A hidden wake-up message is sent to the agent so it greets the user and continues where it left off
Two trigger paths:
| Path | Entry point | Signal |
|---|---|---|
| Agent-triggered | rebuild tool (called by the agent after editing source) | ProgressCallback → RestartReady |
| User-triggered | /rebuild slash command | TuiEvent::RestartReady directly |
Key details:
- The running binary is in memory; source changes on disk don't affect it until restart
- If the build fails, the agent stays running and can read compiler errors to fix them
- Session persistence via SQLite means no conversation context is lost across restarts
- After restart, the agent auto-wakes with session context; no user input needed
- Brain files (SOUL.md, MEMORY.md, etc.) are re-read every turn, so edits take effect immediately without rebuild
- User-defined slash commands (commands.toml) also auto-reload after each agent response
- Hot restart is Unix-only (exec() syscall); on Windows the build/test steps work but restart requires manual relaunch
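The restart step itself boils down to the Unix exec() pattern: replace the current process image while keeping the PID, passing the session id so the new binary resumes the conversation. A sketch under stated assumptions (the real logic lives in SelfUpdater::restart; names here are illustrative):

```rust
use std::process::Command;

/// Argv used to resume the same conversation after a rebuild,
/// matching the `opencrabs chat --session <uuid>` form described above.
fn restart_args(session_id: &str) -> Vec<String> {
    vec!["chat".to_string(), "--session".to_string(), session_id.to_string()]
}

/// Replace the current process with the freshly built binary (Unix only).
/// exec() only returns on failure; on success the new binary takes over.
#[cfg(unix)]
fn hot_restart(binary: &str, session_id: &str) -> std::io::Error {
    use std::os::unix::process::CommandExt;
    Command::new(binary).args(restart_args(session_id)).exec()
}

fn main() {
    // We only exercise the argv builder here; calling hot_restart()
    // would replace this process.
    assert_eq!(restart_args("abc-123"), ["chat", "--session", "abc-123"]);
    println!("{:?}", restart_args("abc-123"));
}
```

Because exec() reuses the PID, supervisors like systemd see one continuous process, which is what makes the restart "hot".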
Modules:
- src/brain/self_update.rs: SelfUpdater struct with auto_detect(), build(), test(), restart()
- src/brain/tools/rebuild.rs: RebuildTool (agent-callable, emits ProgressEvent::RestartReady)
OpenCrabs learns from experience through three local mechanisms; no data ever leaves your machine:
1. Procedural memory: custom commands from experience
When the agent completes a complex workflow, overcomes errors, or follows user corrections, it can save that workflow as a reusable slash command via config_manager add_command. Next session, the command appears in autocomplete and the agent knows it exists.
2. Episodic memory: lessons learned
The agent writes important knowledge to ~/.opencrabs/ brain files as it works:
- MEMORY.md: infrastructure details, troubleshooting patterns, architecture decisions
- USER.md: your preferences, communication style, project context
- memory/YYYY-MM-DD.md: daily logs of integrations, fixes, and decisions
- Custom files (e.g., DEPLOY.md): domain-specific knowledge
3. Cross-session recall: hybrid search
The memory_search and session_search tools use hybrid FTS5 + vector semantic search (Reciprocal Rank Fusion) to find relevant context from past sessions and memory files. Local embeddings via embeddinggemma-300M mean no API calls are needed.
Key difference from cloud-based "self-improving" agents: Your memory files, commands, and brain files are 100% local and belong to you. With local models (LM Studio, Ollama), everything stays on your machine. With cloud providers (Anthropic, MiniMax, OpenRouter), conversations go through their APIs, subject to each provider's data-use policies; most let you opt out of logging and training-data use in their settings. Either way, your self-improvement data (skills, memory, commands) never leaves your machine.
OpenCrabs runs as a daemon on your machine, a persistent terminal agent that's always on. This makes scheduled tasks and background jobs native and trivial.
Cron jobs run as isolated sessions in the background. Each job gets its own session, provider, model, and context β completely independent from your main chat.
# Add a cron job via CLI
opencrabs cron add \
--name "morning-briefing" \
--cron "0 9 * * *" \
--tz "America/New_York" \
--prompt "Check my email, calendar, and weather. Send a morning briefing." \
--deliver telegram:123456789
# List all jobs
opencrabs cron list
# Enable/disable
opencrabs cron enable morning-briefing
opencrabs cron disable morning-briefing
# Remove
opencrabs cron remove morning-briefing

The agent can also create, list, and manage cron jobs autonomously via the cron_manage tool, from any channel:
"Set up a cron job that checks my Trello board every 2 hours and pings me on Telegram if any card is overdue"
| Option | Default | Description |
|---|---|---|
| `--cron` | required | Standard cron expression (e.g. "0 9 * * *") |
| `--tz` | `UTC` | Timezone for the schedule |
| `--prompt` | required | The instruction to execute |
| `--provider` | current | Override provider (e.g. anthropic, gemini) |
| `--model` | current | Override model |
| `--thinking` | `off` | Thinking mode: off, on, budget |
| `--auto-approve` | `true` | Auto-approve tool calls (isolated sessions) |
| `--deliver` | none | Channel to deliver results (e.g. telegram:123456, discord:789, slack:C0123) |
When running as a daemon, OpenCrabs can perform periodic heartbeat checks. Configure HEARTBEAT.md in your workspace (~/.opencrabs/HEARTBEAT.md) with a checklist of things to monitor:
# Heartbeat Checklist
- Check for urgent unread emails
- Check calendar for events in the next 2 hours
- If anything needs attention, message me on Telegram
- Otherwise, reply HEARTBEAT_OK

The heartbeat prompt is loaded into the agent's brain every turn. When the heartbeat fires, the agent reads HEARTBEAT.md and acts on it: checking email, calendar, notifications, or whatever you've configured.
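The all-clear convention above lends itself to a trivial check on the agent's reply. A sketch (illustrative; the actual reply handling in OpenCrabs may differ, but HEARTBEAT_OK is the token from the checklist example):

```rust
/// Decide whether a heartbeat response needs the user's attention.
/// "HEARTBEAT_OK" (case-insensitive, surrounding whitespace ignored)
/// means all clear: suppress any notification.
fn needs_attention(agent_reply: &str) -> bool {
    !agent_reply.trim().eq_ignore_ascii_case("HEARTBEAT_OK")
}

fn main() {
    assert!(!needs_attention("HEARTBEAT_OK\n"));
    assert!(needs_attention("2 urgent emails found, pinging Telegram"));
    println!("ok");
}
```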
| Heartbeat | Cron Job | |
|---|---|---|
| Timing | Periodic (every N minutes) | Exact schedule (cron expression) |
| Session | Main session (shared context) | Isolated session (independent) |
| Context | Has conversation history | Fresh context each run |
| Use case | Batch periodic checks | Standalone scheduled tasks |
| Model | Current session model | Configurable per job |
| Cost | Single turn per cycle | Full session per run |
Rule of thumb: Use heartbeats for lightweight monitoring that benefits from conversation context. Use cron jobs for standalone tasks that need exact timing, different models, or isolation.
To keep OpenCrabs always running, set it to start automatically with your system.
```bash
mkdir -p ~/.config/systemd/user
cat > ~/.config/systemd/user/opencrabs.service << 'EOF'
[Unit]
Description=OpenCrabs AI Agent
After=network.target

[Service]
ExecStart=%h/.cargo/bin/opencrabs daemon
Restart=on-failure
RestartSec=5
Environment=OPENCRABS_HOME=%h/.opencrabs

[Install]
WantedBy=default.target
EOF

systemctl --user daemon-reload
systemctl --user enable opencrabs
systemctl --user start opencrabs

# Check status
systemctl --user status opencrabs

# View logs
journalctl --user -u opencrabs -f
```

Replace `%h/.cargo/bin/opencrabs` with the actual path to your binary if you installed it elsewhere (e.g. `/usr/local/bin/opencrabs`).
```bash
cat > ~/Library/LaunchAgents/com.opencrabs.agent.plist << 'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.opencrabs.agent</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/opencrabs</string>
        <string>daemon</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
    <key>StandardOutPath</key>
    <string>/tmp/opencrabs.log</string>
    <key>StandardErrorPath</key>
    <string>/tmp/opencrabs.err</string>
</dict>
</plist>
EOF

launchctl load ~/Library/LaunchAgents/com.opencrabs.agent.plist

# Check status
launchctl list | grep opencrabs

# Stop and unload
launchctl unload ~/Library/LaunchAgents/com.opencrabs.agent.plist
```

Update the path in `ProgramArguments` to match your install. For cargo installs: `~/.cargo/bin/opencrabs`. For source builds: `/path/to/target/release/opencrabs`.
- Press `Win + R`, type `taskschd.msc`, hit Enter
- Click Create Basic Task in the right panel
- Name: `OpenCrabs`, Description: `OpenCrabs AI Agent`
- Trigger: When I log on
- Action: Start a program
- Program: `C:\Users\<you>\.cargo\bin\opencrabs.exe` (or wherever your binary lives)
- Arguments: `daemon`
- Check Open the Properties dialog before finishing
- In Properties > Settings, check If the task fails, restart every 1 minute
Or via PowerShell:
```powershell
$action = New-ScheduledTaskAction -Execute "$env:USERPROFILE\.cargo\bin\opencrabs.exe" -Argument "daemon"
$trigger = New-ScheduledTaskTrigger -AtLogon
$settings = New-ScheduledTaskSettingsSet -RestartCount 3 -RestartInterval (New-TimeSpan -Minutes 1)
Register-ScheduledTask -TaskName "OpenCrabs" -Action $action -Trigger $trigger -Settings $settings -Description "OpenCrabs AI Agent"
```

All platforms: cron jobs, heartbeats, and channel listeners (Telegram, Discord, Slack, WhatsApp) work in daemon mode. The TUI is not needed for background operation.
```
Presentation Layer
        ↓
CLI (Clap) + TUI (Ratatui + Crossterm)
        ↓
Brain Layer (Dynamic system brain, user commands, config management, self-update)
        ↓
Application Layer
        ↓
Service Layer (Session, Message, Agent, Plan)
        ↓
Data Access Layer (SQLx + SQLite)
        ↓
Integration Layer (LLM Providers, LSP)
```
Key Technologies:
| Component | Crate |
|---|---|
| Async Runtime | Tokio |
| Terminal UI | Ratatui + Crossterm |
| CLI Parsing | Clap (derive) |
| Database | SQLx (SQLite) |
| Serialization | Serde + TOML |
| HTTP Client | Reqwest |
| Syntax Highlighting | Syntect |
| Markdown | pulldown-cmark |
| LSP Client | Tower-LSP |
| Provider Registry | Crabrace |
| Memory Search | qmd (FTS5 + vector embeddings) |
| Error Handling | anyhow + thiserror |
| Logging | tracing + tracing-subscriber |
| Security | zeroize |
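The error-handling pair in the table follows a common Rust pattern: `thiserror` derives a typed error enum (the repo's `OpenCrabsError`) for library code, while `anyhow` wraps it at the application boundary. A minimal std-only sketch of what that derive expands to; the variant names and messages here are illustrative, not the project's actual definitions:

```rust
use std::fmt;

// Illustrative variants -- the real OpenCrabsError lives in src/error/.
#[derive(Debug)]
enum OpenCrabsError {
    Config(String),
    Provider { name: String, status: u16 },
}

// thiserror's #[error("...")] attributes generate an impl like this one.
impl fmt::Display for OpenCrabsError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Self::Config(msg) => write!(f, "config error: {msg}"),
            Self::Provider { name, status } => {
                write!(f, "provider {name} returned HTTP {status}")
            }
        }
    }
}

impl std::error::Error for OpenCrabsError {}

fn main() {
    let err = OpenCrabsError::Provider { name: "anthropic".into(), status: 429 };
    // At an application boundary this would typically become anyhow::Error.
    assert_eq!(err.to_string(), "provider anthropic returned HTTP 429");
    println!("{err}");
}
```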
```
opencrabs/
├── src/
│   ├── main.rs               # Entry point
│   ├── lib.rs                # Library root (crate root – required by Rust)
│   ├── error/                # Error types (OpenCrabsError, ErrorCode)
│   ├── logging/              # Conditional logging system
│   ├── app/                  # Application lifecycle
│   ├── brain/                # Intelligence layer – LLM providers, agent, tools, brain system
│   │   ├── agent/            # Agent service + context management
│   │   ├── provider/         # Provider implementations (Anthropic, GitHub Copilot, OpenAI-Compatible: OpenRouter, Minimax, Custom)
│   │   ├── tools/            # Tool system (read, write, bash, glob, grep, memory_search, etc.)
│   │   ├── tokenizer.rs      # Token counting (tiktoken-based)
│   │   ├── prompt_builder.rs # BrainLoader – assembles system brain from workspace files
│   │   ├── commands.rs       # CommandLoader – user-defined slash commands (TOML)
│   │   └── self_update.rs    # SelfUpdater – build, test, hot-restart via exec()
│   ├── channels/             # Messaging integrations + voice (feature-gated)
│   │   ├── factory.rs        # ChannelFactory – shared factory for channel agent services
│   │   ├── telegram/         # Telegram bot (agent, handler)
│   │   ├── whatsapp/         # WhatsApp Web client (agent, handler, store)
│   │   ├── discord/          # Discord bot (agent, handler)
│   │   ├── slack/            # Slack bot via Socket Mode (agent, handler)
│   │   ├── trello/           # Trello board poller (agent, client, handler, models)
│   │   └── voice/            # STT (Groq Whisper / whisper.cpp) + TTS (OpenAI / Piper)
│   ├── cli/                  # Command-line interface (Clap)
│   ├── config/               # Configuration (config.toml + keys.toml)
│   ├── db/                   # Database layer (SQLx + SQLite)
│   ├── services/             # Business logic (Session, Message, File, Plan)
│   ├── memory/               # Memory search (FTS5 + vector embeddings via qmd)
│   ├── tui/                  # Terminal UI (Ratatui)
│   │   ├── onboarding.rs         # 8-step onboarding wizard (state + logic)
│   │   ├── onboarding_render.rs  # Wizard rendering
│   │   ├── splash.rs             # Splash screen
│   │   ├── app.rs                # App state + event handling
│   │   ├── render.rs             # Main render dispatch
│   │   └── runner.rs             # TUI event loop
│   ├── utils/                # Utilities (retry, etc.)
│   ├── migrations/           # SQLite migrations
│   ├── tests/                # 1,420 tests (see TESTING.md)
│   ├── benches/              # Criterion benchmarks
│   ├── assets/               # Icons, screenshots, visual assets
│   ├── scripts/              # Build and setup scripts
│   └── docs/                 # Documentation templates
├── Cargo.toml
├── config.toml.example
├── keys.toml.example
└── LICENSE.md
```
```bash
# Development build
cargo build

# Release build (optimized, LTO, stripped)
cargo build --release

# Small release build
cargo build --profile release-small

# Run tests (1,420 tests across 60+ modules)
cargo test --all-features
# See TESTING.md for full test coverage documentation

# Run benchmarks
cargo bench

# Format + lint
cargo fmt
cargo clippy -- -D warnings
```

| Feature | Description |
|---|---|
| `telegram` | Telegram bot integration (default: enabled) |
| `whatsapp` | WhatsApp Web integration (default: enabled) |
| `discord` | Discord bot integration (default: enabled) |
| `slack` | Slack bot integration (default: enabled) |
| `trello` | Trello board polling + card management (default: enabled) |
| `local-stt` | Local speech-to-text via rwhisper (candle-based, pure Rust) |
| `local-tts` | Local text-to-speech via Piper (requires `python3`) |
| `profiling` | Enable pprof flamegraph profiling (Unix only) |
| Metric | Value |
|---|---|
| Binary size | 34 MB (release, stripped, LTO) |
| RAM idle (RSS) | 57 MB |
| RAM active (100 msgs) | ~20 MB |
| Startup time | < 50 ms |
| Database ops | < 10 ms (session), < 5 ms (message) |
| Embedding engine | embeddinggemma-300M (~300 MB, local GGUF, auto-downloaded) |
Hybrid semantic search: FTS5 BM25 keyword matching + 768-dim vector embeddings combined via Reciprocal Rank Fusion. The embedding model runs locally: no API key, zero cost, works offline.

Benchmarked with `cargo bench --bench memory` on release builds:
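Reciprocal Rank Fusion itself is simple: each document's fused score is the sum of `1 / (k + rank)` over every ranked list that contains it, where `k` is a smoothing constant (commonly 60). A self-contained sketch of the idea, not OpenCrabs' actual implementation:

```rust
use std::collections::HashMap;

/// Reciprocal Rank Fusion: each document's fused score is the sum of
/// 1.0 / (k + rank) across every ranked list that contains it.
fn rrf_fuse(rankings: &[Vec<&str>], k: f64) -> Vec<(String, f64)> {
    let mut scores: HashMap<String, f64> = HashMap::new();
    for list in rankings {
        for (rank, doc) in list.iter().enumerate() {
            // Ranks are 1-based in the RRF formula.
            *scores.entry((*doc).to_string()).or_insert(0.0) +=
                1.0 / (k + (rank + 1) as f64);
        }
    }
    let mut fused: Vec<(String, f64)> = scores.into_iter().collect();
    fused.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    fused
}

fn main() {
    // One keyword (BM25) ranking and one vector ranking over the same corpus.
    let fts = vec!["notes.md", "todo.md", "ideas.md"];
    let vector = vec!["todo.md", "ideas.md", "log.md"];

    let fused = rrf_fuse(&[fts, vector], 60.0);
    // "todo.md" ranks near the top of both lists, so it wins the fusion
    // even though neither list puts it first.
    assert_eq!(fused[0].0, "todo.md");
    println!("{fused:?}");
}
```

Because only ranks (not raw scores) enter the formula, RRF needs no score normalization between BM25 and cosine similarity, which is why it is a popular choice for hybrid search.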
| Operation | Time | Notes |
|---|---|---|
| Store open | 1.81 ms | Cold start (create DB + schema) |
| Index file | 214 Β΅s | Insert content + document |
| Hash skip | 19.5 Β΅s | Already indexed, unchanged β fast path |
| FTS search (10 docs) | 397 Β΅s | 2-term BM25 query |
| FTS search (50 docs) | 2.57 ms | Typical user corpus |
| FTS search (100 docs) | 9.22 ms | |
| FTS search (500 docs) | 88.1 ms | Large corpus |
| Vector search (10 docs) | 247 Β΅s | 768-dim cosine similarity |
| Vector search (50 docs) | 1.02 ms | 768-dim cosine similarity |
| Vector search (100 docs) | 2.04 ms | 768-dim cosine similarity |
| Hybrid RRF (50 docs) | 3.49 ms | FTS + vector β Reciprocal Rank Fusion |
| Insert embedding | 301 Β΅s | Single 768-dim vector |
| Bulk reindex (50 files) | 11.4 ms | From cold, includes store open |
| Deactivate document | 267 Β΅s | Prune a single entry |
Automated setup: Run `src/scripts/setup.sh` to detect your platform and install everything automatically.
```bash
# Debian/Ubuntu
sudo apt-get install build-essential pkg-config libssl-dev cmake

# Fedora/RHEL
sudo dnf install gcc gcc-c++ make pkg-config openssl-devel cmake

# Arch
sudo pacman -S base-devel pkg-config openssl cmake
```

The default release binary requires AVX2 (Haswell, 2013+). If you have an older CPU with only AVX support (Sandy Bridge/Ivy Bridge, 2011-2012), build from source with:

```bash
RUSTFLAGS="-C target-cpu=native" cargo build --release
```

Pre-built `*-compat` binaries are also available on the releases page for AVX-only CPUs. If your CPU lacks AVX entirely (pre-2011), vector embeddings are disabled and search falls back to FTS-only keyword matching.
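Not sure which instruction set your CPU supports? On Linux you can check the CPU flags directly; a quick sketch (output depends on your hardware):

```shell
# Prints which of the three cases above applies to this machine.
if grep -q avx2 /proc/cpuinfo 2>/dev/null; then
    echo "AVX2 supported: default release binary works"
elif grep -q avx /proc/cpuinfo 2>/dev/null; then
    echo "AVX only: use a *-compat binary or build from source"
else
    echo "No AVX: embeddings disabled, FTS-only search"
fi
```

On macOS, `sysctl machdep.cpu` lists the equivalent feature flags.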
Requires macOS 15 (Sequoia) or later.
```bash
# Install build dependencies
brew install cmake pkg-config
```

If you see this error when running OpenCrabs:

```
dyld: Symbol not found: _OBJC_CLASS_$_MTLResidencySetDescriptor
```
This happens because llama.cpp (used for local embeddings) compiles with Metal GPU support and unconditionally links Metal frameworks that require macOS 15+. There is currently no way to disable Metal at build time through the Rust llama-cpp-sys-2 crate.
Fix: Update to macOS 15 (Sequoia) or later.
Requires CMake, NASM, and Visual Studio Build Tools for native crypto dependencies:
```bash
# Option 1: Install build tools
# - CMake (add to PATH)
# - NASM (add to PATH)
# - Visual Studio Build Tools ("Desktop development with C++")

# Option 2: Use WSL2 (recommended)
sudo apt-get install build-essential pkg-config libssl-dev
```

See BUILD_NOTES.md for detailed troubleshooting.
If the agent starts sending tool call approvals that don't render in the UI (meaning it believes it executed actions that never actually ran), the session context has become corrupted.
Fix: Start a new session.
- Press `/` and type `sessions` (or navigate to the Sessions panel)
- Press `N` to create a new session
- Continue your work in the fresh session
This reliably resolves the issue. A fix is coming in a future release.
Windows Defender (or other antivirus software) may flag `opencrabs.exe` as suspicious because it's an unsigned binary that executes shell commands and makes network requests. This is a false positive.

Fix: add an exclusion:

- Open Windows Security → Virus & threat protection
- Scroll to Virus & threat protection settings → Manage settings
- Scroll to Exclusions → Add or remove exclusions
- Click Add an exclusion → File → select `opencrabs.exe`
Or via PowerShell (admin):
```powershell
Add-MpPreference -ExclusionPath "C:\path\to\opencrabs.exe"
```

If SmartScreen blocks the first run, click More info → Run anyway.
WhisperCrabs is a floating voice-to-text tool. Click to record, click again to stop; it transcribes the audio and copies the text to your clipboard.
- Local (whisper.cpp, on-device) or API transcription
- Fully controllable via D-Bus: start/stop recording, switch providers, view history
- Works as an OpenCrabs tool: use D-Bus to control WhisperCrabs from the agent
SocialCrabs automates social media via CLI + GraphQL with human-like behavior simulation. Twitter/X, Instagram, LinkedIn. No browser needed for read operations.
Setup:
```bash
git clone https://github.com/adolfousier/socialcrabs.git
cd socialcrabs && npm install && npm run build

# Add cookies from browser DevTools to .env (auth_token + ct0 for Twitter)
# See SocialCrabs README for per-platform credential setup

node dist/cli.js session login x         # Authenticate Twitter/X
node dist/cli.js session login ig        # Authenticate Instagram
node dist/cli.js session login linkedin  # Authenticate LinkedIn
node dist/cli.js session status          # Check all sessions
```

Usage with OpenCrabs: Just ask naturally. OpenCrabs calls SocialCrabs CLI commands via bash automatically:
"Check my Twitter mentions" / "Search LinkedIn for AI founders" / "Post this to X"
Read operations run automatically. Write operations (tweet, like, follow, comment, DM) always ask for your approval first.
Twitter/X commands:
```bash
node dist/cli.js x whoami                    # Check logged-in account
node dist/cli.js x mentions -n 5             # Your mentions
node dist/cli.js x home -n 5                 # Your timeline
node dist/cli.js x search "query" -n 10      # Search tweets
node dist/cli.js x read <tweet-url>          # Read a specific tweet
node dist/cli.js x tweet "Hello world"       # Post a tweet
node dist/cli.js x reply <tweet-url> "text"  # Reply to tweet
node dist/cli.js x like <tweet-url>          # Like a tweet
node dist/cli.js x follow <username>         # Follow a user
```

Instagram commands:
```bash
node dist/cli.js ig like <post-url>
node dist/cli.js ig comment <post-url> "text"
node dist/cli.js ig dm <username> "message"
node dist/cli.js ig follow <username>
node dist/cli.js ig followers <username> -n 10
node dist/cli.js ig posts <username> -n 3
```

LinkedIn commands:
```bash
node dist/cli.js linkedin like <post-url>
node dist/cli.js linkedin comment <post-url> "text"
node dist/cli.js linkedin connect <profile-url>
node dist/cli.js linkedin search "query" -n 10
node dist/cli.js linkedin engage --query="query"  # Full engagement session
```

Features: Human-like behavior (randomized delays, natural typing), session persistence across restarts, built-in rate limiting, anti-detection, research-first workflow (scrape targets first, distribute engagement over time).
OpenCrabs is under active development. While functional, it may contain bugs or incomplete features.
You are responsible for monitoring and managing your own API usage and costs.
- API costs from cloud providers (Anthropic, OpenAI, etc.) are your responsibility
- Set billing alerts with your provider
- Consider local LLMs for cost-free operation
- Use the built-in cost tracker to monitor spending
Cloud API issues, billing questions, and account problems should be directed to the respective providers. OpenCrabs provides the tool; you manage your API relationships.
Contributions welcome! Please read CONTRIBUTING.md for guidelines.
```bash
# Setup
git clone https://github.com/adolfousier/opencrabs.git
cd opencrabs
cargo build
cargo test

# Make changes, then submit a PR
```

MIT License – See LICENSE.md for details.
- Claude Code – Inspiration
- Crabrace – Provider registry
- Ratatui – Terminal UI framework
- Anthropic – Claude API
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Docs: Documentation
Built with Rust 🦀 by Adolfo Usier