🐈 MetalClaw is an ultra-lightweight personal AI assistant inspired by OpenClaw and forked from nanobot
⚡️ Delivers core agent functionality in just ~4,000 lines of code — 99% smaller than Clawdbot's 430k+ lines.
🔥 NEW: Seamless integration with Agent Zero — powerful multi-agent AI automation!
- 2026-02-20 🔌 MetalClaw now integrates with n8n! — seamless REST API connection to n8n workflow automation platform. See n8n Integration for setup instructions.
- 2026-02-19 🎉 MetalClaw now integrates with Agent Zero! — seamless HTTP API connection to the powerful multi-agent AI automation platform. See Agent Zero Integration for setup instructions.
- 2026-02-16 🦞 nanobot now integrates a ClawHub skill — search and install public agent skills.
- 2026-02-15 🔑 nanobot now supports OpenAI Codex provider with OAuth login support.
- 2026-02-14 🔌 nanobot now supports MCP! See MCP section for details.
- 2026-02-13 🎉 Released v0.1.3.post7 — includes security hardening and multiple improvements. All users are recommended to upgrade to the latest version. See release notes for more details.
- 2026-02-12 🧠 Redesigned memory system — Less code, more reliable. Join the discussion about it!
- 2026-02-11 ✨ Enhanced CLI experience and added MiniMax support!
- 2026-02-10 🎉 Released v0.1.3.post6 with improvements! Check the updates notes and our roadmap.
- 2026-02-09 💬 Added Slack, Email, and QQ support — nanobot now supports multiple chat platforms!
- 2026-02-08 🔧 Refactored Providers—adding a new LLM provider now takes just 2 simple steps! Check here.
- 2026-02-07 🚀 Released v0.1.3.post5 with Qwen support & several key improvements! Check here for details.
- 2026-02-06 ✨ Added Moonshot/Kimi provider, Discord integration, and enhanced security hardening!
- 2026-02-05 ✨ Added Feishu channel, DeepSeek provider, and enhanced scheduled tasks support!
- 2026-02-04 🚀 Released v0.1.3.post4 with multi-provider & Docker support! Check here for details.
- 2026-02-03 ⚡ Integrated vLLM for local LLM support and improved natural language task scheduling!
- 2026-02-02 🎉 nanobot officially launched! Welcome to try 🐈 nanobot!
🪶 Ultra-Lightweight: Just ~4,000 lines of core agent code — 99% smaller than Clawdbot.
🔬 Research-Ready: Clean, readable code that's easy to understand, modify, and extend for research.
⚡️ Lightning Fast: Minimal footprint means faster startup, lower resource usage, and quicker iterations.
💎 Easy-to-Use: One-click to deploy and you're ready to go.
🔥 Agent Zero Integration: Seamlessly connect to Agent Zero for advanced multi-agent automation, web browsing, and code execution capabilities.
| 📈 24/7 Real-Time Market Analysis | 🚀 Full-Stack Software Engineer | 📅 Smart Daily Routine Manager | 📚 Personal Knowledge Assistant |
|---|---|---|---|
| Discovery • Insights • Trends | Develop • Deploy • Scale | Schedule • Automate • Organize | Learn • Memory • Reasoning |
Install from source (latest features, recommended for development):

```bash
git clone https://github.com/JunSuzuki1973/MetalClaw.git
cd MetalClaw
pip install -e .
```

> [!NOTE]
> MetalClaw is a fork of nanobot by HKUDS. We extend it with Agent Zero integration and other enhancements.
> [!TIP]
> Set your API key in `~/.nanobot/config.json`.
> Get API keys: OpenRouter (Global) · Brave Search (optional, for web search)
1. Initialize

```bash
nanobot onboard
```

2. Configure (`~/.nanobot/config.json`)

Add or merge these two parts into your config (other options have defaults).

Set your API key (e.g. OpenRouter, recommended for global users):

```json
{
  "providers": {
    "openrouter": {
      "apiKey": "sk-or-v1-xxx"
    }
  }
}
```

Set your model:

```json
{
  "agents": {
    "defaults": {
      "model": "anthropic/claude-opus-4-5"
    }
  }
}
```

3. Chat

```bash
nanobot agent
```

That's it! You have a working AI assistant in 2 minutes.
Connect nanobot to your favorite chat platform.
| Channel | What you need |
|---|---|
| Telegram | Bot token from @BotFather |
| Discord | Bot token + Message Content intent |
| WhatsApp | QR code scan |
| Feishu | App ID + App Secret |
| Mochat | Claw token (auto-setup available) |
| DingTalk | App Key + App Secret |
| Slack | Bot token + App-Level token |
| Email | IMAP/SMTP credentials |
| QQ | App ID + App Secret |
Telegram (Recommended)

1. Create a bot
   - Open Telegram, search `@BotFather`
   - Send `/newbot`, follow the prompts
   - Copy the token

2. Configure

```json
{
  "channels": {
    "telegram": {
      "enabled": true,
      "token": "YOUR_BOT_TOKEN",
      "allowFrom": ["YOUR_USER_ID"]
    }
  }
}
```

> You can find your User ID in Telegram settings, shown as `@yourUserId`. Copy this value without the `@` symbol and paste it into the config file.

3. Run

```bash
nanobot gateway
```

Mochat (Claw IM)
Uses Socket.IO WebSocket by default, with HTTP polling fallback.

1. Ask nanobot to set up Mochat for you

Simply send this message to nanobot (replace xxx@xxx with your real email):

```
Read https://raw.githubusercontent.com/HKUDS/MoChat/refs/heads/main/skills/nanobot/skill.md and register on MoChat. My Email account is xxx@xxx Bind me as your owner and DM me on MoChat.
```

nanobot will automatically register, configure `~/.nanobot/config.json`, and connect to Mochat.

2. Restart the gateway

```bash
nanobot gateway
```

That's it — nanobot handles the rest!

Manual configuration (advanced)

If you prefer to configure manually, add the following to `~/.nanobot/config.json`:

> Keep `claw_token` private. It should only be sent in the `X-Claw-Token` header to your Mochat API endpoint.
```json
{
  "channels": {
    "mochat": {
      "enabled": true,
      "base_url": "https://mochat.io",
      "socket_url": "https://mochat.io",
      "socket_path": "/socket.io",
      "claw_token": "claw_xxx",
      "agent_user_id": "6982abcdef",
      "sessions": ["*"],
      "panels": ["*"],
      "reply_delay_mode": "non-mention",
      "reply_delay_ms": 120000
    }
  }
}
```

Discord
1. Create a bot
- Go to https://discord.com/developers/applications
- Create an application → Bot → Add Bot
- Copy the bot token
2. Enable intents
- In the Bot settings, enable MESSAGE CONTENT INTENT
- (Optional) Enable SERVER MEMBERS INTENT if you plan to use allow lists based on member data
3. Get your User ID
- Discord Settings → Advanced → enable Developer Mode
- Right-click your avatar → Copy User ID
4. Configure
```json
{
  "channels": {
    "discord": {
      "enabled": true,
      "token": "YOUR_BOT_TOKEN",
      "allowFrom": ["YOUR_USER_ID"]
    }
  }
}
```

5. Invite the bot
   - OAuth2 → URL Generator
   - Scopes: `bot`
   - Bot Permissions: `Send Messages`, `Read Message History`
   - Open the generated invite URL and add the bot to your server

6. Run

```bash
nanobot gateway
```

WhatsApp

Requires Node.js ≥ 18.

1. Link device

```bash
nanobot channels login
# Scan QR with WhatsApp → Settings → Linked Devices
```

2. Configure
```json
{
  "channels": {
    "whatsapp": {
      "enabled": true,
      "allowFrom": ["+1234567890"]
    }
  }
}
```

3. Run (two terminals)

```bash
# Terminal 1
nanobot channels login

# Terminal 2
nanobot gateway
```

Feishu (飞书)
Uses a WebSocket long connection — no public IP required.

1. Create a Feishu bot
   - Visit the Feishu Open Platform
   - Create a new app → Enable Bot capability
   - Permissions: Add `im:message` (send messages)
   - Events: Add `im.message.receive_v1` (receive messages)
   - Select Long Connection mode (requires running nanobot first to establish the connection)
   - Get the App ID and App Secret from "Credentials & Basic Info"
   - Publish the app

2. Configure

```json
{
  "channels": {
    "feishu": {
      "enabled": true,
      "appId": "cli_xxx",
      "appSecret": "xxx",
      "encryptKey": "",
      "verificationToken": "",
      "allowFrom": []
    }
  }
}
```

> - `encryptKey` and `verificationToken` are optional for Long Connection mode.
> - `allowFrom`: Leave empty to allow all users, or add `["ou_xxx"]` to restrict access.

3. Run

```bash
nanobot gateway
```

> [!TIP]
> Feishu uses WebSocket to receive messages — no webhook or public IP needed!
QQ (QQ单聊)
Uses the botpy SDK with WebSocket — no public IP required. Currently supports private messages only.

1. Register & create a bot
   - Visit the QQ Open Platform → Register as a developer (personal or enterprise)
   - Create a new bot application
   - Go to 开发设置 (Developer Settings) → copy the AppID and AppSecret

2. Set up a sandbox for testing
   - In the bot management console, find 沙箱配置 (Sandbox Config)
   - Under 在消息列表配置 (message list config), click 添加成员 (add member) and add your own QQ number
   - Once added, scan the bot's QR code with mobile QQ → open the bot profile → tap 发消息 (send message) to start chatting

3. Configure

> - `allowFrom`: Leave empty for public access, or add user openids to restrict. You can find openids in the nanobot logs when a user messages the bot.
> - For production: submit a review in the bot console and publish. See the QQ Bot Docs for the full publishing flow.

```json
{
  "channels": {
    "qq": {
      "enabled": true,
      "appId": "YOUR_APP_ID",
      "secret": "YOUR_APP_SECRET",
      "allowFrom": []
    }
  }
}
```

4. Run

```bash
nanobot gateway
```

Now send a message to the bot from QQ — it should respond!
DingTalk (钉钉)
Uses Stream Mode — no public IP required.
1. Create a DingTalk bot
- Visit DingTalk Open Platform
- Create a new app → Add Robot capability
- Configuration:
- Toggle Stream Mode ON
- Permissions: Add necessary permissions for sending messages
- Get AppKey (Client ID) and AppSecret (Client Secret) from "Credentials"
- Publish the app
2. Configure

```json
{
  "channels": {
    "dingtalk": {
      "enabled": true,
      "clientId": "YOUR_APP_KEY",
      "clientSecret": "YOUR_APP_SECRET",
      "allowFrom": []
    }
  }
}
```

> `allowFrom`: Leave empty to allow all users, or add `["staffId"]` to restrict access.

3. Run

```bash
nanobot gateway
```

Slack
Uses Socket Mode — no public URL required.
1. Create a Slack app
- Go to Slack API → Create New App → "From scratch"
- Pick a name and select your workspace
2. Configure the app
   - Socket Mode: Toggle ON → Generate an App-Level Token with the `connections:write` scope → copy it (`xapp-...`)
   - OAuth & Permissions: Add bot scopes: `chat:write`, `reactions:write`, `app_mentions:read`
   - Event Subscriptions: Toggle ON → Subscribe to bot events: `message.im`, `message.channels`, `app_mention` → Save Changes
   - App Home: Scroll to Show Tabs → Enable Messages Tab → Check "Allow users to send Slash commands and messages from the messages tab"
   - Install App: Click Install to Workspace → Authorize → copy the Bot Token (`xoxb-...`)

3. Configure nanobot

```json
{
  "channels": {
    "slack": {
      "enabled": true,
      "botToken": "xoxb-...",
      "appToken": "xapp-...",
      "groupPolicy": "mention"
    }
  }
}
```

4. Run

```bash
nanobot gateway
```

DM the bot directly or @mention it in a channel — it should respond!

> [!TIP]
> - `groupPolicy`: `"mention"` (default — respond only when @mentioned), `"open"` (respond to all channel messages), or `"allowlist"` (restrict to specific channels).
> - DM policy defaults to open. Set `"dm": {"enabled": false}` to disable DMs.
Give nanobot its own email account. It polls IMAP for incoming mail and replies via SMTP — like a personal email assistant.
1. Get credentials (Gmail example)
   - Create a dedicated Gmail account for your bot (e.g. `my-nanobot@gmail.com`)
   - Enable 2-Step Verification → Create an App Password
   - Use this app password for both IMAP and SMTP

2. Configure

> - `consentGranted` must be `true` to allow mailbox access. This is a safety gate — set it to `false` to fully disable the channel.
> - `allowFrom`: Leave empty to accept emails from anyone, or restrict to specific senders.
> - `smtpUseTls` and `smtpUseSsl` default to `true`/`false` respectively, which is correct for Gmail (port 587 + STARTTLS). No need to set them explicitly.
> - Set `"autoReplyEnabled": false` if you only want to read/analyze emails without sending automatic replies.

```json
{
  "channels": {
    "email": {
      "enabled": true,
      "consentGranted": true,
      "imapHost": "imap.gmail.com",
      "imapPort": 993,
      "imapUsername": "my-nanobot@gmail.com",
      "imapPassword": "your-app-password",
      "smtpHost": "smtp.gmail.com",
      "smtpPort": 587,
      "smtpUsername": "my-nanobot@gmail.com",
      "smtpPassword": "your-app-password",
      "fromAddress": "my-nanobot@gmail.com",
      "allowFrom": ["your-real-email@gmail.com"]
    }
  }
}
```

3. Run

```bash
nanobot gateway
```

🐈 nanobot can link to the agent social network (agent community). Just send one message and your nanobot joins automatically!
| Platform | How to Join (send this message to your bot) |
|---|---|
| Moltbook | Read https://moltbook.com/skill.md and follow the instructions to join Moltbook |
| ClawdChat | Read https://clawdchat.ai/skill.md and follow the instructions to join ClawdChat |
Simply send the command above to your nanobot (via CLI or any chat channel), and it will handle the rest.
Config file: ~/.nanobot/config.json
> [!TIP]
> - Groq provides free voice transcription via Whisper. If configured, Telegram voice messages will be automatically transcribed.
> - Zhipu Coding Plan: If you're on Zhipu's coding plan, set `"apiBase": "https://open.bigmodel.cn/api/coding/paas/v4"` in your zhipu provider config.
> - MiniMax (Mainland China): If your API key is from MiniMax's mainland China platform (minimaxi.com), set `"apiBase": "https://api.minimaxi.com/v1"` in your minimax provider config.
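As a concrete example of such an override, a zhipu provider entry using the coding-plan base URL would look like this (merge into `~/.nanobot/config.json`; the key value is a placeholder):

```json
{
  "providers": {
    "zhipu": {
      "apiKey": "your-zhipu-api-key",
      "apiBase": "https://open.bigmodel.cn/api/coding/paas/v4"
    }
  }
}
```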
| Provider | Purpose | Get API Key |
|---|---|---|
| `custom` | Any OpenAI-compatible endpoint | — |
| `openrouter` | LLM (recommended, access to all models) | openrouter.ai |
| `anthropic` | LLM (Claude direct) | console.anthropic.com |
| `openai` | LLM (GPT direct) | platform.openai.com |
| `deepseek` | LLM (DeepSeek direct) | platform.deepseek.com |
| `groq` | LLM + voice transcription (Whisper) | console.groq.com |
| `gemini` | LLM (Gemini direct) | aistudio.google.com |
| `minimax` | LLM (MiniMax direct) | platform.minimax.io |
| `aihubmix` | LLM (API gateway, access to all models) | aihubmix.com |
| `dashscope` | LLM (Qwen) | dashscope.console.aliyun.com |
| `moonshot` | LLM (Moonshot/Kimi) | platform.moonshot.cn |
| `zhipu` | LLM (Zhipu GLM) | open.bigmodel.cn |
| `vllm` | LLM (local, any OpenAI-compatible server) | — |
| `openai_codex` | LLM (Codex, OAuth) | `nanobot provider login openai-codex` |
| `github_copilot` | LLM (GitHub Copilot, OAuth) | Requires GitHub Copilot subscription |

OpenAI Codex (OAuth)

Codex uses OAuth instead of API keys. Requires a ChatGPT Plus or Pro account.
OpenAI Codex (OAuth)
Codex uses OAuth instead of API keys. Requires a ChatGPT Plus or Pro account.
1. Login:

```bash
nanobot provider login openai-codex
```

2. Set model (merge into `~/.nanobot/config.json`):

```json
{
  "agents": {
    "defaults": {
      "model": "openai-codex/gpt-5.1-codex"
    }
  }
}
```

3. Chat:

```bash
nanobot agent -m "Hello!"
```

> Docker users: use `docker run -it` for interactive OAuth login.
Custom Provider (Any OpenAI-compatible API)
If your provider is not listed above but exposes an OpenAI-compatible API (e.g. Together AI, Fireworks, Azure OpenAI, self-hosted endpoints), use the custom provider:
```json
{
  "providers": {
    "custom": {
      "apiKey": "your-api-key",
      "apiBase": "https://api.your-provider.com/v1"
    }
  },
  "agents": {
    "defaults": {
      "model": "your-model-name"
    }
  }
}
```

> The `custom` provider routes through LiteLLM's OpenAI-compatible path. It works with any endpoint that follows the OpenAI chat completions API format. The model name is passed directly to the endpoint without any prefix.
vLLM (local / OpenAI-compatible)
Run your own model with vLLM or any OpenAI-compatible server, then add to config:
1. Start the server (example):

```bash
vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000
```

2. Add to config (partial — merge into `~/.nanobot/config.json`):

Provider (the key can be any non-empty string for a local server):

```json
{
  "providers": {
    "vllm": {
      "apiKey": "dummy",
      "apiBase": "http://localhost:8000/v1"
    }
  }
}
```

Model:

```json
{
  "agents": {
    "defaults": {
      "model": "meta-llama/Llama-3.1-8B-Instruct"
    }
  }
}
```

Adding a New Provider (Developer Guide)
nanobot uses a Provider Registry (nanobot/providers/registry.py) as the single source of truth.
Adding a new provider only takes 2 steps — no if-elif chains to touch.
Step 1. Add a `ProviderSpec` entry to `PROVIDERS` in `nanobot/providers/registry.py`:

```python
ProviderSpec(
    name="myprovider",                   # config field name
    keywords=("myprovider", "mymodel"),  # model-name keywords for auto-matching
    env_key="MYPROVIDER_API_KEY",        # env var for LiteLLM
    display_name="My Provider",          # shown in `nanobot status`
    litellm_prefix="myprovider",         # auto-prefix: model → myprovider/model
    skip_prefixes=("myprovider/",),      # don't double-prefix
)
```

Step 2. Add a field to `ProvidersConfig` in `nanobot/config/schema.py`:

```python
class ProvidersConfig(BaseModel):
    ...
    myprovider: ProviderConfig = ProviderConfig()
```

That's it! Environment variables, model prefixing, config matching, and `nanobot status` display will all work automatically.
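To make the keyword-matching and prefixing behavior concrete, here is a simplified, hypothetical sketch of the idea (names mirror the spec fields above, but this is not the actual registry implementation — see `nanobot/providers/registry.py` for the real logic):

```python
# Simplified sketch of keyword-based provider matching and LiteLLM prefixing.
from dataclasses import dataclass


@dataclass(frozen=True)
class ProviderSpec:
    name: str
    keywords: tuple[str, ...]
    litellm_prefix: str = ""
    skip_prefixes: tuple[str, ...] = ()


# Two example entries; the real PROVIDERS tuple has one entry per provider.
PROVIDERS = (
    ProviderSpec("dashscope", ("dashscope", "qwen"), "dashscope", ("dashscope/",)),
    ProviderSpec("moonshot", ("moonshot", "kimi"), "moonshot", ("moonshot/",)),
)


def prefixed_model(model: str) -> str:
    """Prefix a bare model name with its matched provider's LiteLLM prefix."""
    for spec in PROVIDERS:
        if any(kw in model.lower() for kw in spec.keywords):
            if spec.skip_prefixes and model.startswith(spec.skip_prefixes):
                return model  # already prefixed — don't double-prefix
            return f"{spec.litellm_prefix}/{model}"
    return model  # no keyword match: pass through unchanged
```

With entries like these, `"qwen-max"` becomes `"dashscope/qwen-max"`, while an already-prefixed `"dashscope/qwen-max"` passes through untouched.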
Common `ProviderSpec` options:

| Field | Description | Example |
|---|---|---|
| `litellm_prefix` | Auto-prefix model names for LiteLLM | `"dashscope"` → `dashscope/qwen-max` |
| `skip_prefixes` | Don't prefix if the model already starts with these | `("dashscope/", "openrouter/")` |
| `env_extras` | Additional env vars to set | `(("ZHIPUAI_API_KEY", "{api_key}"),)` |
| `model_overrides` | Per-model parameter overrides | `(("kimi-k2.5", {"temperature": 1.0}),)` |
| `is_gateway` | Can route any model (like OpenRouter) | `True` |
| `detect_by_key_prefix` | Detect gateway by API key prefix | `"sk-or-"` |
| `detect_by_base_keyword` | Detect gateway by API base URL | `"openrouter"` |
| `strip_model_prefix` | Strip existing prefix before re-prefixing | `True` (for AiHubMix) |
> [!TIP]
> The config format is compatible with Claude Desktop / Cursor. You can copy MCP server configs directly from any MCP server's README.
nanobot supports MCP — connect external tool servers and use them as native agent tools.
Add MCP servers to your `config.json`:

```json
{
  "tools": {
    "mcpServers": {
      "filesystem": {
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/dir"]
      }
    }
  }
}
```

Two transport modes are supported:

| Mode | Config | Example |
|---|---|---|
| Stdio | `command` + `args` | Local process via `npx` / `uvx` |
| HTTP | `url` | Remote endpoint (`https://mcp.example.com/sse`) |
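For the HTTP mode, an entry needs only a `url` — for example (the server name `remote` and the endpoint URL are placeholders):

```json
{
  "tools": {
    "mcpServers": {
      "remote": {
        "url": "https://mcp.example.com/sse"
      }
    }
  }
}
```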
MCP tools are automatically discovered and registered on startup. The LLM can use them alongside built-in tools — no extra configuration needed.
> [!TIP]
> For production deployments, set `"restrictToWorkspace": true` in your config to sandbox the agent.
| Option | Default | Description |
|---|---|---|
| `tools.restrictToWorkspace` | `false` | When `true`, restricts all agent tools (shell, file read/write/edit, list) to the workspace directory. Prevents path traversal and out-of-scope access. |
| `channels.*.allowFrom` | `[]` (allow all) | Whitelist of user IDs. Empty = allow everyone; non-empty = only listed users can interact. |
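Putting both options together, a hardened config fragment might look like this (merge into `~/.nanobot/config.json`; the Telegram token and user ID are placeholders):

```json
{
  "tools": {
    "restrictToWorkspace": true
  },
  "channels": {
    "telegram": {
      "enabled": true,
      "token": "YOUR_BOT_TOKEN",
      "allowFrom": ["123456789"]
    }
  }
}
```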
| Command | Description |
|---|---|
| `nanobot onboard` | Initialize config & workspace |
| `nanobot agent -m "..."` | Chat with the agent |
| `nanobot agent` | Interactive chat mode |
| `nanobot agent --no-markdown` | Show plain-text replies |
| `nanobot agent --logs` | Show runtime logs during chat |
| `nanobot gateway` | Start the gateway |
| `nanobot status` | Show status |
| `nanobot provider login openai-codex` | OAuth login for providers |
| `nanobot channels login` | Link WhatsApp (scan QR) |
| `nanobot channels status` | Show channel status |
Interactive mode exits: `exit`, `quit`, `/exit`, `/quit`, `:q`, or Ctrl+D.
Scheduled Tasks (Cron)
```bash
# Add a job
nanobot cron add --name "daily" --message "Good morning!" --cron "0 9 * * *"
nanobot cron add --name "hourly" --message "Check status" --every 3600

# List jobs
nanobot cron list

# Remove a job
nanobot cron remove <job_id>
```

> [!TIP]
> The `-v ~/.nanobot:/root/.nanobot` flag mounts your local config directory into the container, so your config and workspace persist across container restarts.
```bash
docker compose run --rm nanobot-cli onboard           # first-time setup
vim ~/.nanobot/config.json                            # add API keys
docker compose up -d nanobot-gateway                  # start gateway
docker compose run --rm nanobot-cli agent -m "Hello!" # run CLI
docker compose logs -f nanobot-gateway                # view logs
docker compose down                                   # stop
```

```bash
# Build the image
docker build -t nanobot .

# Initialize config (first time only)
docker run -v ~/.nanobot:/root/.nanobot --rm nanobot onboard

# Edit config on host to add API keys
vim ~/.nanobot/config.json

# Run gateway (connects to enabled channels, e.g. Telegram/Discord/Mochat)
docker run -v ~/.nanobot:/root/.nanobot -p 18790:18790 nanobot gateway

# Or run a single command
docker run -v ~/.nanobot:/root/.nanobot --rm nanobot agent -m "Hello!"
docker run -v ~/.nanobot:/root/.nanobot --rm nanobot status
```

```
nanobot/
├── agent/          # 🧠 Core agent logic
│   ├── loop.py     # Agent loop (LLM ↔ tool execution)
│   ├── context.py  # Prompt builder
│   ├── memory.py   # Persistent memory
│   ├── skills.py   # Skills loader
│   ├── subagent.py # Background task execution
│   └── tools/      # Built-in tools (incl. spawn)
├── skills/         # 🎯 Bundled skills (github, weather, tmux...)
├── channels/       # 📱 Chat channel integrations
├── bus/            # 🚌 Message routing
├── cron/           # ⏰ Scheduled tasks
├── heartbeat/      # 💓 Proactive wake-up
├── providers/      # 🤖 LLM providers (OpenRouter, etc.)
├── session/        # 💬 Conversation sessions
├── config/         # ⚙️ Configuration
└── cli/            # 🖥️ Commands
```
PRs welcome! The codebase is intentionally small and readable. 🤗
Roadmap — Pick an item and open a PR!
- Multi-modal — See and hear (images, voice, video)
- Long-term memory — Never forget important context
- Better reasoning — Multi-step planning and reflection
- More integrations — Calendar and more
- Self-improvement — Learn from feedback and mistakes
MetalClaw is a fork of nanobot. Original contributors:
MetalClaw contributors:
MetalClaw seamlessly integrates with n8n, a powerful workflow automation platform.
- Workflow Management: List, get, create, update, and execute workflows
- Automation: Automate complex business processes and tasks
- Visual Editor: Design workflows with n8n's intuitive visual interface
- RESTful API: Full control over n8n via REST API
1. Install n8n:

```bash
docker run -it --rm \
  --name n8n \
  -p 5678:5678 \
  -v ~/.n8n:/home/node/.n8n \
  n8nio/n8n
```

2. Configure MetalClaw:

Edit `nanobot/agent/tools/n8n_tool.py` and update the API settings:

```python
self.api_url = "http://localhost:5678"  # n8n API URL
self.api_key = "your-n8n-api-key"       # n8n API key (optional)
```

3. Get an API key (if needed): Access n8n settings → API → Create new API key, or use n8n without an API key (default Docker setup).

MetalClaw provides full control over n8n workflows:
| Action | Command | Description |
|---|---|---|
| List | `list` | List all workflows (filter by active status) |
| Get | `get id=xxx` | Get a specific workflow by ID |
| Create | `create @workflow.json` | Create a new workflow from JSON |
| Update | `update id=xxx @workflow.json` | Update a workflow by ID |
| Execute | `execute id=xxx` | Execute a workflow by ID |
| Delete | `delete id=xxx` | Delete a workflow by ID |
Example conversation:

```
You: List all active n8n workflows
MetalClaw: 🔌 Querying n8n...
MetalClaw: **Found 3 workflows:**
- test_workflow (active)
- main_workflow (active)
- sample_workflow (inactive)

You: Execute workflow 8b9c5a3e1f2d4c6b8a0d1e2f3a4b5c6d
MetalClaw: 🔌 Executing workflow...
MetalClaw: ✅ Workflow executed successfully!
```
MetalClaw seamlessly integrates with Agent Zero, a powerful multi-agent AI automation platform for complex tasks.
- Multi-Agent Collaboration: Multiple AI agents working together on complex tasks
- Advanced Capabilities: Web browsing, code execution, file operations, and more
- Task Automation: Schedule and automate complex workflows
- Knowledge Management: Built-in knowledge base and memory system
1. Install Agent Zero:

```bash
git clone https://github.com/agent0ai/agent-zero.git
cd agent-zero
docker compose up -d
```

2. Configure Agent Zero:

```bash
# Access the Agent Zero container
docker exec -it agent-zero bash

# Set WEB_UI_HOST to expose the API externally
echo "WEB_UI_HOST=0.0.0.0" >> /root/.openclaw/workspace/agent-zero/usr/.env

# Restart Agent Zero
exit
docker restart agent-zero

# Start the Agent Zero server (in the container)
docker exec agent-zero bash -c 'cd /root/.openclaw/workspace/agent-zero && source venv/bin/activate && nohup python run_ui.py > /tmp/agent-zero-ui.log 2>&1 &'
```

3. Verify the connection:

```bash
# Check health
curl http://localhost:5000/health
# Should return: {"status":"ok"}
```

4. Get an API key:

```bash
# Get the runtime ID
RUNTIME_ID=$(curl -s -H "Origin: http://localhost:5000" http://localhost:5000/csrf_token | python3 -c "import sys, json; print(json.load(sys.stdin)[\"runtime_id\"])")

# Generate the API key (SHA256 hash of runtime_id::username:password)
# The default username/password is empty, so just use runtime_id::
echo -n "${RUNTIME_ID}::" | sha256sum | cut -d' ' -f1 | xxd -r -p | base64 | head -c 16
```

5. Update MetalClaw:

Edit `nanobot/agent/tools/agent_zero_tool.py` and update `self.api_key` with the generated key.
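If you prefer, the key-generation shell pipeline in step 4 can be reproduced in Python (the function name is ours; it mirrors the hash-then-base64 scheme shown above: SHA256 of `runtime_id:username:password`, base64-encoded, first 16 characters):

```python
# Python equivalent of: echo -n "${RUNTIME_ID}::" | sha256sum | cut -d' ' -f1 \
#   | xxd -r -p | base64 | head -c 16
import base64
import hashlib


def agent_zero_api_key(runtime_id: str, username: str = "", password: str = "") -> str:
    """SHA256 of 'runtime_id:username:password', base64-encoded, truncated to 16 chars."""
    digest = hashlib.sha256(f"{runtime_id}:{username}:{password}".encode()).digest()
    return base64.b64encode(digest).decode()[:16]
```

With the default empty username/password this hashes `RUNTIME_ID::`, exactly like the shell pipeline.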
MetalClaw automatically connects to Agent Zero when you use the `exec_agent_zero` tool. Simply send a message and MetalClaw will:
- Get CSRF token from Agent Zero
- Send your message with API key authentication
- Receive Agent Zero's response
- Forward it back to you
Example conversation:

```
You: Use Agent Zero to search for AI trends
MetalClaw: 🔌 Connecting to Agent Zero...
MetalClaw: **Here are the latest AI trends:**
- Multi-agent systems
- Code generation tools
- Voice AI assistants
```
Recommended by JUN
"I recommend using MetalClaw + Agent Zero together for the best AI automation experience. MetalClaw provides the interface, while Agent Zero handles complex multi-agent tasks. This combination is powerful and flexible!"
MetalClaw seamlessly integrates with n8n, a powerful workflow automation platform that allows you to connect and automate various services and applications.
- Workflow Management: Create and manage complex workflows with a visual editor
- Automation: Automate repetitive tasks and integrate with 400+ services
- REST API: Full control over workflows via REST API
- Self-Hosted: Run n8n on your own server for complete control
- Install n8n:
# Using Docker
docker run -it --rm \
--name n8n \
-p 5678:5678 \
-v ~/.n8n:/home/node/.n8n \
n8nio/n8n- Configure n8n:
- Access n8n at
http://localhost:5678 - Create your admin account
- Create workflows as needed
- Update MetalClaw:
Edit
nanobot/agent/tools/n8n_tool.pyand configure:
self.api_url = "http://localhost:5678/api/v1"
self.api_key = "your-n8n-api-key" # Get from n8n settingsMetalClaw automatically connects to n8n when you use the exec_n8n tool. Available actions:
| Action | Description |
|---|---|
| `list` | List all workflows (filter by active status) |
| `get` | Get a specific workflow by ID |
| `create` | Create a new workflow from JSON |
| `update` | Update a workflow by ID |
| `execute` | Execute a workflow by ID |
| `delete` | Delete a workflow by ID |
Example conversation:

```
You: List all active n8n workflows
MetalClaw: 🔌 Connecting to n8n...
MetalClaw: Here are your active workflows:
- test_workflow (ID: 8b9c5a3e1f2d4c6b8a0d1e2f3a4b5c6d)
- main_workflow (ID: 7f8e9d0c1b2a3f4e5d6c7b8a9f0e1d2c)
```
Thanks for visiting ✨ MetalClaw!
MetalClaw is for educational, research, and technical exchange purposes only.