forked from HKUDS/nanobot

"🐈 nanobot: The Ultra-Lightweight Clawdbot"

nanobot

nanobot: Ultra-Lightweight Personal AI Assistant


🐈 nanobot is an ultra-lightweight personal AI assistant inspired by Clawdbot.

⚡️ Delivers core agent functionality in just ~4,000 lines of code, 99% smaller than Clawdbot's 430k+ lines.

📢 News

  • 2026-02-01 🎉 nanobot launched! Give 🐈 nanobot a try!

Key Features of nanobot:

🪶 Ultra-Lightweight: Just ~4,000 lines of code delivering core functionality, 99% smaller than Clawdbot.

🔬 Research-Ready: Clean, readable code that's easy to understand, modify, and extend for research.

⚡️ Lightning Fast: Minimal footprint means faster startup, lower resource usage, and quicker iterations.

💎 Easy-to-Use: One-click deploy and you're ready to go.

πŸ—οΈ Architecture

nanobot architecture

✨ Features

📈 24/7 Real-Time Market Analysis: Discovery • Insights • Trends

🚀 Full-Stack Software Engineer: Develop • Deploy • Scale

📅 Smart Daily Routine Manager: Schedule • Automate • Organize

📚 Personal Knowledge Assistant: Learn • Memory • Reasoning

💰 Token Usage Tracking & Budget Monitoring: Track LLM API consumption, set budget limits, and receive alerts when approaching or exceeding thresholds.
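The budget-alert behavior can be sketched roughly as follows. This is an illustration of threshold-based alerting, not nanobot's actual implementation; the threshold fractions mirror the `alertThresholds` values shown in the full config example.

```python
# Illustrative sketch of threshold-based budget alerts (not nanobot's actual code).
def triggered_alerts(spent_usd: float, budget_usd: float, thresholds: list[float]) -> list[float]:
    """Return the threshold fractions that current spending has reached or crossed."""
    if budget_usd <= 0:
        return []
    fraction = spent_usd / budget_usd
    return [t for t in thresholds if fraction >= t]

# $17 spent of a $20 monthly budget crosses the 50% and 80% marks.
print(triggered_alerts(17.0, 20.0, [0.5, 0.8, 1.0]))  # [0.5, 0.8]
```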

📦 Install

Install from source (latest features, recommended for development)

git clone https://github.com/HKUDS/nanobot.git
cd nanobot
pip install -e .

Install with uv (stable, fast)

uv tool install nanobot-ai

Install from PyPI (stable)

pip install nanobot-ai

Install with Docker

# Build the Docker image
docker build -t nanobot .

# Or use docker-compose
docker-compose build

Note

For development/updates: If you modify the nanobot code, rebuild the image to include changes:

docker build --no-cache -t nanobot .

🚀 Quick Start

Tip

Set your API key in ~/.nanobot/config.json. Get API keys from: OpenRouter (LLM) · Brave Search (optional, for web search) · NVIDIA (for the Kimi AI tool). You can also switch the model to minimax/minimax-m2 for lower cost.

1. Initialize

nanobot onboard

2. Configure (~/.nanobot/config.json)

{
  "providers": {
    "openrouter": {
      "apiKey": "sk-or-v1-xxx"
    },
    "nvidia": {
      "apiKey": "nvapi-xxx"
    }
  },
  "agents": {
    "defaults": {
      "model": "anthropic/claude-opus-4-5"
    }
  },
  "webSearch": {
    "apiKey": "BSA-xxx"
  }
}
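Since the config is plain JSON, a quick sanity check can catch typos before you start chatting. A minimal sketch (the key names follow the sample above; this helper is illustrative, not part of nanobot):

```python
# Minimal config sanity check; key names follow the sample config above.
# This helper is illustrative, not part of nanobot itself.
import json
import pathlib

def check_config(path: str) -> list[str]:
    """Return a list of problems found in a nanobot-style config file."""
    problems = []
    cfg = json.loads(pathlib.Path(path).read_text())
    providers = cfg.get("providers", {})
    if not any(p.get("apiKey") for p in providers.values()):
        problems.append("no provider has an apiKey set")
    if not cfg.get("agents", {}).get("defaults", {}).get("model"):
        problems.append("agents.defaults.model is missing")
    return problems
```

Running `check_config` on the sample above should return an empty list.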

3. Chat

nanobot agent -m "What is 2+2?"

That's it! You have a working AI assistant in 2 minutes.

🦙 Local Models (Ollama)

Run nanobot with local Ollama models for privacy and zero-cost inference.

1. Install Ollama

# macOS/Linux
curl -fsSL https://ollama.ai/install.sh | sh

# Windows
# Download from: https://ollama.ai/download

2. Configure (~/.nanobot/config.json)

{
  "ollama": {
    "enabled": true,
    "apiBase": "http://localhost:11434",
    "model": "llama3.2",
    "timeout": 120.0
  }
}
If you haven't initialized yet, run onboarding first, then edit the config to enable Ollama:

# Regular installation
nanobot onboard  # Edit ~/.nanobot/config.json to enable ollama

# Docker
docker run -it -v ~/.nanobot:/root/.nanobot nanobot onboard
# Then edit ~/.nanobot/config.json to enable ollama

3. Pull a model

# Regular installation
nanobot ollama pull llama3.2
# Or manually: ollama pull llama3.2

# Docker
docker run -it nanobot ollama pull llama3.2

4. Check status

# Regular installation
nanobot ollama status
nanobot ollama list

# Docker
docker run -it nanobot ollama status
docker run -it nanobot ollama list

5. Chat

nanobot agent -m "Hello from local LLM!"

🐳 Docker Usage

For Docker users, use these commands instead:

# Check Ollama status
docker-compose run --rm nanobot ollama status

# List available models  
docker-compose run --rm nanobot ollama list

# Pull new models
docker-compose run --rm nanobot ollama pull llama3.2

# Chat with Ollama models
docker-compose run --rm nanobot agent -m "Hello from local LLM!"

Tip

Popular models: llama3.2, mistral, codellama, llama3.1:8b. Ollama models run locally with zero API costs!

🖥️ Local Models (vLLM)

Run nanobot with your own local models using vLLM or any OpenAI-compatible server.

1. Start your vLLM server

vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000

2. Configure (~/.nanobot/config.json)

{
  "providers": {
    "vllm": {
      "apiKey": "dummy",
      "apiBase": "http://localhost:8000/v1"
    }
  },
  "agents": {
    "defaults": {
      "model": "meta-llama/Llama-3.1-8B-Instruct"
    }
  }
}

3. Chat

nanobot agent -m "Hello from my local LLM!"

Tip

The apiKey can be any non-empty string for local servers that don't require authentication.
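Because any OpenAI-compatible server works here, the request sent to it is a standard chat-completions payload. A sketch of what that body looks like for the vLLM setup above (illustrative; not nanobot's internal code):

```python
# Build (but don't send) a standard OpenAI-style chat-completions payload,
# matching the vLLM setup above. Illustrative sketch, not nanobot internals.
def chat_payload(model: str, message: str) -> dict:
    return {
        "model": model,
        "messages": [{"role": "user", "content": message}],
    }

payload = chat_payload("meta-llama/Llama-3.1-8B-Instruct", "Hello from my local LLM!")
# The Authorization header just needs a non-empty token for unauthenticated local servers:
headers = {"Authorization": "Bearer dummy", "Content-Type": "application/json"}
print(payload["model"])  # meta-llama/Llama-3.1-8B-Instruct
```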

💬 Chat Apps

Talk to your nanobot through Telegram or WhatsApp, anytime, anywhere.

Channel | Setup
--- | ---
Telegram | Easy (just a token)
WhatsApp | Medium (scan QR)

Telegram (Recommended)

1. Create a bot

  • Open Telegram, search @BotFather
  • Send /newbot, follow prompts
  • Copy the token

2. Configure

{
  "channels": {
    "telegram": {
      "enabled": true,
      "token": "YOUR_BOT_TOKEN",
      "allowFrom": ["YOUR_USER_ID"]
    }
  }
}

Get your user ID from @userinfobot on Telegram.

3. Run

nanobot gateway

WhatsApp

Requires Node.js ≥18.

1. Link device

nanobot channels login
# Scan QR with WhatsApp → Settings → Linked Devices

2. Configure

{
  "channels": {
    "whatsapp": {
      "enabled": true,
      "allowFrom": ["+1234567890"]
    }
  }
}
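The allowFrom list appears to act as an allowlist of senders. A rough sketch of what such a check looks like (illustration only; nanobot's actual matching rules, e.g. for an empty list, may differ):

```python
# Hypothetical allowlist check (illustration only; nanobot's actual
# matching rules, e.g. for an empty allowFrom list, may differ).
def is_allowed(sender: str, allow_from: list[str]) -> bool:
    """Exact-match allowlist on sender IDs / phone numbers."""
    return sender in allow_from

print(is_allowed("+1234567890", ["+1234567890"]))  # True
print(is_allowed("+9876543210", ["+1234567890"]))  # False
```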

3. Run (two terminals)

# Terminal 1
nanobot channels login

# Terminal 2
nanobot gateway

βš™οΈ Configuration

Note

Config file: ~/.nanobot/config.json. Some providers may require additional environment variables; for NVIDIA, the apiKey is configured in config.json.

Providers

Note

Groq provides free voice transcription via Whisper. If configured, Telegram voice messages will be automatically transcribed.

Provider | Purpose | Get API Key
--- | --- | ---
openrouter | LLM (recommended, access to all models) | openrouter.ai
anthropic | LLM (Claude direct) | console.anthropic.com
openai | LLM (GPT direct) | platform.openai.com
groq | LLM + voice transcription (Whisper) | console.groq.com
gemini | LLM (Gemini direct) | aistudio.google.com
Full config example
{
  "agents": {
    "defaults": {
      "model": "anthropic/claude-opus-4-5"
    }
  },
  "providers": {
    "openrouter": {
      "apiKey": "sk-or-v1-xxx"
    },
    "nvidia": {
      "apiKey": "nvapi-xxx"
    },
    "groq": {
      "apiKey": "gsk_xxx"
    }
  },
  "channels": {
    "telegram": {
      "enabled": true,
      "token": "123456:ABC...",
      "allowFrom": ["123456789"]
    },
    "whatsapp": {
      "enabled": false
    }
  },
  "tools": {
    "web": {
      "search": {
        "apiKey": "BSA..."
      }
    }
  },
  "usage": {
    "monthlyBudgetUsd": 20.0,
    "alertThresholds": [0.5, 0.8, 1.0]
  },
  "ollama": {
    "enabled": true,
    "apiBase": "http://localhost:11434",
    "model": "llama3.2",
    "timeout": 120.0
  }
}

CLI Reference

Command | Description
--- | ---
nanobot onboard | Initialize config & workspace
nanobot agent -m "..." | Chat with the agent
nanobot agent | Interactive chat mode
nanobot usage | Show token usage & budget stats
nanobot gateway | Start the gateway
nanobot status | Show status
nanobot channels login | Link WhatsApp (scan QR)
nanobot channels status | Show channel status
nanobot ollama status | Check Ollama service status
nanobot ollama list | List installed Ollama models
nanobot ollama pull <model> | Download an Ollama model

Scheduled Tasks (Cron)
# Add a job
nanobot cron add --name "daily" --message "Good morning!" --cron "0 9 * * *"
nanobot cron add --name "hourly" --message "Check status" --every 3600

# List jobs
nanobot cron list

# Remove a job
nanobot cron remove <job_id>
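The --cron flag takes a standard five-field expression (minute, hour, day of month, month, day of week), so "0 9 * * *" fires at 09:00 every day. A minimal matcher for the numeric-and-wildcard case, as an illustration of the syntax (not nanobot's actual scheduler):

```python
# Minimal five-field cron matcher for numeric fields and "*" only;
# an illustration of the syntax, not nanobot's actual scheduler.
from datetime import datetime

def cron_matches(expr: str, when: datetime) -> bool:
    fields = expr.split()  # minute, hour, day of month, month, day of week
    # Note: real cron uses 0=Sunday for day of week; Python's weekday() is 0=Monday.
    actual = [when.minute, when.hour, when.day, when.month, when.weekday()]
    return all(f == "*" or int(f) == v for f, v in zip(fields, actual))

print(cron_matches("0 9 * * *", datetime(2026, 2, 1, 9, 0)))   # True  (09:00 matches)
print(cron_matches("0 9 * * *", datetime(2026, 2, 1, 10, 0)))  # False (wrong hour)
```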

🐳 Docker

Tip

The -v ~/.nanobot:/root/.nanobot flag mounts your local config directory into the container, so your config and workspace persist across container restarts.

Build and run nanobot in a container:

# Build the image
docker build -t nanobot .

# Initialize config (first time only)
docker run -v ~/.nanobot:/root/.nanobot --rm nanobot onboard

# Edit config on host to add API keys
vim ~/.nanobot/config.json

# Run gateway (connects to Telegram/WhatsApp)
docker run -v ~/.nanobot:/root/.nanobot -p 18790:18790 nanobot gateway

# Or run a single command
docker run -v ~/.nanobot:/root/.nanobot --rm nanobot agent -m "Hello!"
docker run -v ~/.nanobot:/root/.nanobot --rm nanobot status

πŸ“ Project Structure

nanobot/
├── agent/          # 🧠 Core agent logic
│   ├── loop.py     #    Agent loop (LLM ↔ tool execution)
│   ├── context.py  #    Prompt builder
│   ├── memory.py   #    Persistent memory
│   ├── skills.py   #    Skills loader
│   ├── subagent.py #    Background task execution
│   └── tools/      #    Built-in tools (incl. spawn)
├── skills/         # 🎯 Bundled skills (github, weather, tmux...)
├── channels/       # 📱 WhatsApp integration
├── bus/            # 🚌 Message routing
├── cron/           # ⏰ Scheduled tasks
├── heartbeat/      # 💓 Proactive wake-up
├── providers/      # 🤖 LLM providers (OpenRouter, etc.)
├── session/        # 💬 Conversation sessions
├── config/         # ⚙️ Configuration
└── cli/            # 🖥️ Commands

🤝 Contribute & Roadmap

PRs welcome! The codebase is intentionally small and readable. 🤗

Roadmap: pick an item and open a PR!

  • Voice Transcription: support for Groq Whisper (Issue #13)
  • Multi-modal: see and hear (images, voice, video)
  • Long-term memory: never forget important context
  • Better reasoning: multi-step planning and reflection
  • More integrations: Discord, Slack, email, calendar
  • Self-improvement: learn from feedback and mistakes

Contributors

⭐ Star History

Thanks for visiting ✨ nanobot!


nanobot is for educational, research, and technical exchange purposes only.
