# telegram-ai-bridge

**Remote-control your local AI agent via Telegram.**

Leave Claude Code running on your desktop. Continue the conversation from your phone.

A self-hosted Telegram bridge that connects one bot to one local AI CLI — with session persistence, resumable workflows, and owner-only access.


English | 简体中文


## Why This Exists

Most "Telegram + AI" projects are chat wrappers. This one is a remote control for your local coding agent.

- Targets local coding agents (Claude Code, Codex), not generic chatbots
- Sessions and credentials stay on your machine
- Supports real, resumable agent workflows
- Owner-only access by default

Core product rule: **one bot = one backend = one mental model**.

Supported backends:

| Backend | SDK | Status |
| --- | --- | --- |
| `claude` | Claude Agent SDK | Recommended |
| `codex` | Codex SDK | Recommended |
| `gemini` | Gemini Code Assist API | Experimental compatibility |

This project is intentionally narrower than multi-channel AI workspace tools. It optimizes for the shortest path from phone → Telegram → local AI CLI.


## Quick Start

```sh
git clone https://github.com/AliceLJY/telegram-ai-bridge.git
cd telegram-ai-bridge
bun install
bun run bootstrap --backend claude   # generate a starter config.json
bun run setup --backend claude       # interactive configuration
bun run check --backend claude       # preflight validation
bun run start --backend claude
```

### Recommended Deployment

Run separate bots for separate agents:

- @your-claude-bot → Claude only
- @your-codex-bot → Codex only
- @your-gemini-bot → Gemini only (if you explicitly need it)

## What You Get

| Feature | Description |
| --- | --- |
| One-command startup | `bun run start --backend <name>` |
| Setup wizard | `bun run setup` — interactive config generation |
| Preflight check | `bun run check --backend <name>` — validates config and CLI state |
| Session persistence | SQLite-backed sticky sessions with resume/preview |
| Task tracking | Persistent approval and execution history |
| Owner-only access | Only your Telegram ID can use the bot |
| Dual executor | `direct` (in-process) or `local-agent` (JSONL stdio subprocess) |
| Docker support | Same runtime model, credential volumes mounted in |
| macOS LaunchAgent | Auto-generated plist for background deployment |
| Group shared context | Multiple bots in one group see each other's replies (SQLite / JSON / Redis) |
| CI | Bun tests wired into GitHub Actions |

## Telegram Commands

Sessions are sticky: messages continue the current session until you explicitly change it.

| Command | Description |
| --- | --- |
| `/new` | Start a new session |
| `/sessions` | List recent sessions |
| `/peek <id>` | Preview a session, read-only |
| `/resume <id>` | Rebind the current chat to an owned session |
| `/model` | Pick a model for the current bot |
| `/status` | Show backend, model, cwd, and session |
| `/tasks` | Show recent task history |
| `/verbose 0\|1\|2` | Change progress verbosity |
| `/relay <target> <msg>` | Forward a message to another bot and return its reply |
| `/a2a status` | Show A2A bus status, peer health, and loop-guard stats |
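The sticky-session rule above can be sketched as a chat-to-session binding that only `/new` and `/resume` change. This is an illustrative model, not the project's actual API; the names `SessionRouter` and `bindings` are hypothetical (the real project persists bindings in SQLite).

```javascript
// Hypothetical sketch of sticky sessions: each chat stays bound to one
// session until /new or /resume rebinds it.
class SessionRouter {
  constructor() {
    this.bindings = new Map(); // chatId -> sessionId
    this.nextId = 1;
  }
  // /new: start a fresh session and make it current for this chat
  newSession(chatId) {
    const sessionId = `s${this.nextId++}`;
    this.bindings.set(chatId, sessionId);
    return sessionId;
  }
  // /resume <id>: rebind the chat to an existing session
  resume(chatId, sessionId) {
    this.bindings.set(chatId, sessionId);
  }
  // Plain message: continue whichever session is currently bound
  route(chatId) {
    return this.bindings.get(chatId) ?? this.newSession(chatId);
  }
}
```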

## Multi-Bot Group Collaboration

Telegram bots cannot see each other's messages — this is a platform-level limitation. When you put Claude and Codex in the same group, neither can read the other's replies.

This project works around it with a pluggable shared context store. Each bot writes its reply after responding. When another bot is @mentioned, it reads the shared context and includes the other bot's replies in its prompt.

```
You:   @claude Review this code
CC:    [reviews code, writes reply to shared store]

You:   @codex Do you agree with CC's review?
Codex: [reads CC's reply from shared store, gives opinion]
```

No copy-pasting needed. Built-in limits (30 messages / 3000 tokens / 20-minute TTL) prevent context bloat.
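The write-after-reply / read-on-mention flow can be sketched as below. This is an illustrative in-memory model, not the project's actual implementation (which backs the store with SQLite, JSON, or Redis); the class and method names are hypothetical, and only the message-count and TTL limits from above are modeled (the token cap is omitted).

```javascript
// Illustrative shared context store. Limits mirror the documented defaults:
// 30 messages and a 20-minute TTL.
const MAX_MESSAGES = 30;
const TTL_MS = 20 * 60 * 1000;

class SharedContext {
  constructor(now = () => Date.now()) {
    this.now = now; // injectable clock for testing
    this.byGroup = new Map(); // groupId -> [{ bot, text, ts }]
  }
  // Each bot appends its own reply after responding.
  write(groupId, bot, text) {
    const log = this.byGroup.get(groupId) ?? [];
    log.push({ bot, text, ts: this.now() });
    // Enforce the message-count cap by keeping only the newest entries
    this.byGroup.set(groupId, log.slice(-MAX_MESSAGES));
  }
  // When @mentioned, a bot reads siblings' recent replies for its prompt.
  read(groupId, selfBot) {
    const cutoff = this.now() - TTL_MS;
    return (this.byGroup.get(groupId) ?? [])
      .filter((m) => m.ts >= cutoff && m.bot !== selfBot)
      .map((m) => `${m.bot}: ${m.text}`);
  }
}
```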

### Storage Backend Comparison

| Backend | Dependencies | Concurrency | Best For |
| --- | --- | --- | --- |
| `sqlite` (default) | None (built-in) | WAL mode, single-writer | Single bot, low concurrency |
| `json` | None (built-in) | Atomic write (tmp+rename) | Zero-dependency deployment |
| `redis` | ioredis | Native concurrency + TTL | Multi-bot, Docker environments |

Set `sharedContextBackend` in `config.json`:

```json
{
  "shared": {
    "sharedContextBackend": "redis",
    "redisUrl": "redis://localhost:6379"
  }
}
```

Note: Bots only respond when explicitly @mentioned or replied to. They don't auto-reply to each other.

## A2A: Agent-to-Agent Communication

Beyond passive shared context, A2A lets bots actively respond to each other in group chats. When one bot replies to a user, the A2A bus broadcasts the response to sibling bots. Each sibling independently decides whether to chime in.

```
You:    @claude What's the best way to handle retries?
Claude: [responds with retry pattern advice]
         ↓ A2A broadcast
Codex:  [reads Claude's reply, adds: "I'd also suggest exponential backoff..."]
```

Built-in safety:

- **Loop guard:** max 2 generations of bot-to-bot replies per conversation turn
- **Cooldown:** 60s minimum between A2A responses per bot
- **Circuit breaker:** auto-disables unreachable peers after 3 failures
- **Rate limit:** max 3 A2A responses per 5-minute window
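Three of these guards (loop guard, cooldown, rate limit) compose naturally into a single gate, sketched below. This is a hypothetical model of the documented limits, not the project's `a2a/` implementation; the class and field names are illustrative, and the circuit breaker is left out for brevity.

```javascript
// Illustrative A2A guard: a bot may respond only if all checks pass.
class A2AGuard {
  constructor(now = () => Date.now()) {
    this.now = now; // injectable clock for testing
    this.lastReply = new Map();  // bot -> timestamp of last A2A response
    this.windowHits = new Map(); // bot -> timestamps of recent responses
  }
  allow(bot, generation) {
    const t = this.now();
    // Loop guard: at most 2 generations of bot-to-bot replies per turn
    if (generation > 2) return false;
    // Cooldown: 60s minimum between A2A responses per bot
    const last = this.lastReply.get(bot) ?? -Infinity;
    if (t - last < 60_000) return false;
    // Rate limit: max 3 responses per rolling 5-minute window
    const hits = (this.windowHits.get(bot) ?? []).filter((h) => t - h < 300_000);
    if (hits.length >= 3) return false;
    hits.push(t);
    this.windowHits.set(bot, hits);
    this.lastReply.set(bot, t);
    return true;
  }
}
```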

**Important:** A2A works only in group chats. Private/DM conversations are never broadcast — this prevents cross-bot message leaks between separate DM windows.

Enable in `config.json`:

```json
{
  "shared": {
    "a2aEnabled": true,
    "a2aPorts": { "claude": 18810, "codex": 18811 }
  }
}
```

Each bot instance listens on its assigned port. Peers are auto-discovered from `a2aPorts` (excluding self).
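That discovery rule is simple enough to sketch: every entry in `a2aPorts` except the bot's own name becomes a peer endpoint. The function name and the localhost URL shape are assumptions for illustration, not the project's actual code.

```javascript
// Hypothetical peer discovery from the a2aPorts config map.
function discoverPeers(a2aPorts, selfName, host = "127.0.0.1") {
  return Object.entries(a2aPorts)
    .filter(([name]) => name !== selfName) // exclude our own entry
    .map(([name, port]) => ({ name, url: `http://${host}:${port}` }));
}

const peers = discoverPeers({ claude: 18810, codex: 18811 }, "claude");
// → [{ name: "codex", url: "http://127.0.0.1:18811" }]
```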

## /relay — Cross-Bot Point-to-Point Messaging

While A2A broadcast is group-only, `/relay` works everywhere, including DMs. It sends a message to another bot's AI backend and returns the response directly.

```
/relay codex What do you think of this approach?
```

Aliases for less typing: `cc` = claude, `cx` = codex, `gm` = gemini.

**Reply-to forwarding:** long-press a bot's reply and respond with `/relay <target> [instruction]` — the replied-to message is automatically included in the relay prompt. No copy-pasting needed.

```
Claude: [reviews your code]
You:    (reply to Claude's message) /relay cx Do you agree with this review?
Codex:  [sees Claude's full review + your instruction, gives opinion]
```

This is ideal for fact-checking and cross-review workflows.


## Architecture

```
Telegram bot
  → start.js
  → config.json
  → bridge.js
  → executor (direct | local-agent)
  → backend adapter (claude | codex | gemini)
  → local credentials and session files
```

Each bot instance keeps its own Telegram token, SQLite DBs, credential directory, and model settings.


## Configuration

`bun run bootstrap --backend claude` generates a starter `config.json`; alternatively, copy `config.example.json`.

```json
{
  "shared": {
    "ownerTelegramId": "123456789",
    "cwd": "/Users/you",
    "httpProxy": "",
    "defaultVerboseLevel": 1,
    "executor": "direct",
    "tasksDb": "tasks.db",
    "sharedContextBackend": "sqlite",
    "sharedContextDb": "shared-context.db",
    "redisUrl": ""
  },
  "backends": {
    "claude": {
      "enabled": true,
      "telegramBotToken": "...",
      "sessionsDb": "sessions.db",
      "model": "claude-sonnet-4-6",
      "permissionMode": "default"
    },
    "codex": {
      "enabled": true,
      "telegramBotToken": "...",
      "sessionsDb": "sessions-codex.db",
      "model": ""
    },
    "gemini": {
      "enabled": false,
      "telegramBotToken": "",
      "sessionsDb": "sessions-gemini.db",
      "model": "gemini-2.5-pro",
      "oauthClientId": "",
      "oauthClientSecret": "",
      "googleCloudProject": ""
    }
  }
}
```

`config.json` is gitignored. `shared.sessionTimeoutMs` controls the per-request timeout only, not idle-session expiry.

Inspect the resolved config with `bun run config --backend claude` (secrets redacted).

### Backend Notes

Claude:

- Requires local login state under `~/.claude/`
- Supports `permissionMode`: `default` or `bypassPermissions`

Codex:

- Requires local login state under `~/.codex/`
- Optional model override; an empty string uses the Codex default

Gemini:

- Experimental compatibility backend, not a primary target
- Requires `~/.gemini/oauth_creds.json`, `oauthClientId`, and `oauthClientSecret`
- Uses the Gemini Code Assist API mode, not full CLI terminal control
- Recommended only when you intentionally need Gemini support

## macOS LaunchAgent

Generate and install:

```sh
./scripts/install-launch-agent.sh --backend claude --install
./scripts/install-launch-agent.sh --backend codex --install
```

The wrapper runs `bun run check` before `bun run start`, so a bad config fails fast.

Default labels: `com.telegram-ai-bridge`, `com.telegram-ai-bridge-codex`, `com.telegram-ai-bridge-gemini`.

```sh
launchctl print gui/$(id -u)/com.telegram-ai-bridge
launchctl kickstart -k gui/$(id -u)/com.telegram-ai-bridge
tail -f bridge.log
```

If you see `409 Conflict`, another process is polling the same bot token.

## Docker

```sh
docker build -t telegram-ai-bridge .

docker run -d \
  --name tg-ai-bridge-claude \
  -v $(pwd)/config.json:/app/config.json:ro \
  -v ~/.claude:/root/.claude \
  telegram-ai-bridge --backend claude
```

Swap the credential mount and `--backend` flag for other backends. See `docker-compose.example.yml` for a Compose starter.

## Project Structure

- `start.js` — CLI entry for start, bootstrap, check, setup, config
- `config.js` — config loader and setup wizard
- `bridge.js` — Telegram bot runtime
- `sessions.js` — SQLite session persistence
- `shared-context.js` — cross-bot shared context entry point
- `shared-context/` — pluggable backends (SQLite / JSON / Redis)
- `a2a/` — agent-to-agent communication bus, loop guard, peer health
- `adapters/` — backend integrations
- `launchd/` — LaunchAgent template for macOS
- `scripts/` — install wrapper and runtime launcher
- `docker-compose.example.yml` — Compose starter
## Execution Modes

- `direct` — runs the backend adapter in-process (default)
- `local-agent` — communicates with a local agent subprocess over JSONL stdio

Set `shared.executor` in `config.json`, or override with `BRIDGE_EXECUTOR`.
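The `local-agent` wire format — one JSON object per line over the subprocess's stdio — can be sketched with an encoder and an incremental decoder. The message fields and function names below are illustrative assumptions, not the project's actual protocol.

```javascript
// Hypothetical JSONL stdio framing for the local-agent executor.
// Each message is one JSON object terminated by a newline.
function encodeJsonl(msg) {
  return JSON.stringify(msg) + "\n";
}

// Incremental decoder: stdio chunks may split or merge lines, so we
// buffer partial data until a newline completes a message.
function makeJsonlDecoder(onMessage) {
  let buf = "";
  return (chunk) => {
    buf += chunk;
    let i;
    while ((i = buf.indexOf("\n")) >= 0) {
      const line = buf.slice(0, i);
      buf = buf.slice(i + 1);
      if (line.trim()) onMessage(JSON.parse(line));
    }
  };
}
```

The buffering matters because a subprocess pipe gives no guarantee that one `write` on the agent side arrives as one chunk on the bridge side.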


## How It Fits Together

Three ways to make AI agents talk to each other — different protocols, different scenarios:

| Layer | Protocol | How | Scenario |
| --- | --- | --- | --- |
| Terminal | MCP | Built-in `codex mcp-server` + `claude mcp serve`, zero code | CC ↔ Codex direct calls in your terminal |
| Telegram group | Custom A2A | This project's A2A bus, auto-broadcast | Multiple bots in one group, chiming in |
| Telegram DM | Custom A2A | This project's `/relay` command | Explicit cross-bot forwarding from your phone |
| Server | Google A2A v0.3.0 | openclaw-a2a-gateway | OpenClaw agents across servers |

**MCP vs A2A:** MCP is a tool-calling protocol ("I invoke your capability"); A2A is a peer-communication protocol ("I talk to you as an equal"). CC calling Codex via MCP is using Codex as a tool — not two agents chatting.

### Terminal: CLI-to-CLI via MCP (No Telegram Needed)

Claude Code and Codex each have a built-in MCP server mode. Register them with each other and they can call each other directly — no bridge, no Telegram, no custom code:

```sh
# In Claude Code: register Codex as an MCP server
claude mcp add codex -- codex mcp-server
```

```toml
# In Codex: register Claude Code as an MCP server (in ~/.codex/config.toml)
[mcp_servers.claude-code]
type = "stdio"
command = "claude"
args = ["mcp", "serve"]
```

### Telegram: This Project

Groups use A2A auto-broadcast; DMs use `/relay`. See the sections above.

### Server: openclaw-a2a-gateway

For OpenClaw agents communicating across servers via the Google A2A v0.3.0 standard protocol — a different system entirely; see `openclaw-a2a-gateway`.

## Development

```sh
bun test
```

GitHub Actions runs the same suite on every push and pull request.

## License

MIT
