Self-hosted, always-on AI agent platform running in containers.
📌 Introduction to Memoh - The Case for an Always-On, Containerized Home Agent
Memoh is an always-on, containerized AI agent system. Create multiple AI bots, each running in its own isolated container with persistent memory, and interact with them across Telegram, Discord, Lark (Feishu), QQ, Matrix, WeCom, WeChat, Email, or the built-in Web UI. Bots can execute commands, edit files, browse the web, call external tools via MCP, and remember everything — like giving each bot its own computer and brain.
One-click install (requires Docker):

```shell
curl -fsSL https://memoh.sh | sudo sh
```

Silent install with all defaults:

```shell
curl -fsSL ... | sudo sh -s -- -y
```

Or manually:

```shell
git clone --depth 1 https://github.com/memohai/Memoh.git
cd Memoh
cp conf/app.docker.toml config.toml
# Edit config.toml
sudo docker compose up -d
```

Install a specific version:

```shell
curl -fsSL https://memoh.sh | sudo MEMOH_VERSION=v0.6.0 sh
```

Use the CN mirror for slow image pulls:

```shell
curl -fsSL https://memoh.sh | sudo USE_CN_MIRROR=true sh
```

On macOS, or if your user is in the `docker` group, `sudo` is not required.
Visit http://localhost:8082 after startup. Default login: admin / admin123
See DEPLOYMENT.md for custom configuration and production setup.
Memoh is built for always-on continuity — an AI that stays online, and a memory that stays yours.
- Lightweight & Fast: Built with Go as home/studio infrastructure, runs efficiently on edge devices.
- Containerized by default: Each bot gets an isolated container with its own filesystem, network, and tools.
- Hybrid split: Cloud inference for frontier model capability, local-first memory and indexing for privacy.
- Multi-user first: Explicit sharing and privacy boundaries across users and bots.
- Full graphical configuration: Configure bots, channels, MCP, skills, and all settings through a modern web UI — no coding required.
- 🤖 Multi-Bot & Multi-User: Create multiple bots that chat privately, in groups, or with each other. Bots distinguish individual users in group chats, remember each person's context, and support cross-platform identity binding.
- 📦 Containerized: Each bot runs in its own isolated containerd container with a dedicated filesystem and network — like having its own computer. Supports snapshots, data export/import, and versioning.
- 🧠 Memory Engineering: LLM-driven fact extraction, hybrid retrieval (dense + sparse + BM25), 24-hour context loading, memory compaction & rebuild. Pluggable backends: Built-in (off / sparse / dense), Mem0, OpenViking.
- 💬 9 Channels: Telegram, Discord, Lark (Feishu), QQ, Matrix, WeCom, WeChat, Email (Mailgun / SMTP / Gmail OAuth), and built-in Web UI — with unified streaming, rich text, and attachments.
- 🔧 MCP (Model Context Protocol): Full MCP support (HTTP / SSE / Stdio / OAuth). Connect external tool servers for extensibility; each bot manages its own independent MCP connections.
- 🌐 Browser Automation: Headless Chromium/Firefox via Playwright — navigate, click, fill forms, screenshot, read accessibility trees, manage tabs.
- 🎭 Skills & Subagents: Define bot personality via modular skill files; delegate complex tasks to sub-agents with independent context.
- ⏰ Automation: Cron-based scheduled tasks and periodic heartbeat for autonomous bot activity.
- 🖥️ Web UI: Modern dashboard (Vue 3 + Tailwind CSS) — streaming chat, tool call visualization, file manager, visual configuration for all settings. Dark/light theme, i18n.
- 🔐 Access Control: Priority-based ACL rules with allow/deny effects, scoped by channel identity, channel type, or conversation.
- 🧪 Multi-Model: Any OpenAI-compatible, Anthropic, or Google provider. Per-bot model assignment, provider OAuth, and automatic model import.
- 🚀 One-Click Deploy: Docker Compose with automatic migration, containerd setup, and CNI networking.
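The priority-based ACL model above can be sketched in a few lines. The types and field names below are hypothetical, not Memoh's actual implementation; they just illustrate the stated semantics: rules carry a priority and an allow/deny effect, the highest-priority matching rule wins, and an unmatched request is denied.

```go
package main

import (
	"fmt"
	"sort"
)

// Rule is a hypothetical ACL entry. Higher Priority wins; Allow is the effect.
// Channel is the scope to match; "*" matches any channel.
type Rule struct {
	Priority int
	Channel  string
	Allow    bool
}

// evaluate checks rules from highest to lowest priority and returns the
// effect of the first match. With no matching rule, access is denied.
func evaluate(rules []Rule, channel string) bool {
	sort.Slice(rules, func(i, j int) bool { return rules[i].Priority > rules[j].Priority })
	for _, r := range rules {
		if r.Channel == "*" || r.Channel == channel {
			return r.Allow
		}
	}
	return false // deny by default
}

func main() {
	rules := []Rule{
		{Priority: 10, Channel: "*", Allow: true},       // baseline: allow everyone
		{Priority: 100, Channel: "email", Allow: false}, // but the deny rule outranks it for email
	}
	fmt.Println(evaluate(rules, "telegram")) // true
	fmt.Println(evaluate(rules, "email"))    // false
}
```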
Memoh's memory system is built around Memory Providers — pluggable backends that control how a bot stores, retrieves, and manages long-term memory.
| Provider | Description |
|---|---|
| Built-in | Self-hosted, ships with Memoh. Three modes: Off (file-based, no vector search), Sparse (neural sparse vectors via local model, no API cost), Dense (embedding-based semantic search via Qdrant). |
| Mem0 | SaaS memory via the Mem0 API. |
| OpenViking | Self-hosted or SaaS memory with its own API. |
Each bot binds one provider. During chat, the bot automatically extracts key facts from every conversation turn and stores them as structured memories. On each new message, the most relevant memories are retrieved via hybrid search and injected into the bot's context — giving it personalized, long-term recall across conversations.
Additional capabilities include memory compaction (merge redundant entries), rebuild, manual creation/editing, and vector manifold visualization (Top-K distribution & CDF curves). See the documentation for setup details.
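Hybrid retrieval has to merge the dense, sparse, and BM25 rankings into a single list before injection into context. Memoh's exact fusion method isn't documented here; a common technique for this is reciprocal rank fusion (RRF), sketched below with hypothetical memory IDs. Each document scores the sum of 1/(k + rank) over every list it appears in, so items ranked well by several retrievers float to the top.

```go
package main

import (
	"fmt"
	"sort"
)

// rrfFuse merges several ranked result lists into one, scoring each document
// by the sum of 1/(k+rank) across lists. k=60 is the conventional constant.
func rrfFuse(rankings [][]string, k float64) []string {
	scores := map[string]float64{}
	for _, ranking := range rankings {
		for rank, doc := range ranking {
			scores[doc] += 1.0 / (k + float64(rank+1))
		}
	}
	docs := make([]string, 0, len(scores))
	for doc := range scores {
		docs = append(docs, doc)
	}
	sort.Slice(docs, func(i, j int) bool {
		if scores[docs[i]] != scores[docs[j]] {
			return scores[docs[i]] > scores[docs[j]]
		}
		return docs[i] < docs[j] // deterministic tie-break
	})
	return docs
}

func main() {
	dense := []string{"m1", "m2", "m3"}  // semantic ranking (e.g. from Qdrant)
	sparse := []string{"m2", "m1", "m4"} // neural sparse ranking
	bm25 := []string{"m2", "m3", "m1"}   // lexical ranking
	fmt.Println(rrfFuse([][]string{dense, sparse, bm25}, 60)) // [m2 m1 m3 m4]
}
```

Here `m2` wins despite being second in the dense list, because two of the three retrievers rank it first.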
*Screenshots: Chat · Container · Providers · File Manager · Scheduled Tasks · Token Usage*
```mermaid
flowchart TB
    subgraph Clients [" Clients "]
        direction LR
        CH["Channels<br/>Telegram · Discord · Feishu · QQ<br/>Matrix · WeCom · WeChat · Email"]
        WEB["Web UI (Vue 3 :8082)"]
    end
    CH & WEB --> API
    subgraph Server [" Server · Go :8080 "]
        API["REST API & Channel Adapters"]
        subgraph Agent [" In-process AI Agent "]
            TWILIGHT["Twilight AI SDK<br/>OpenAI · Anthropic · Google"]
            CONV["Conversation Flow<br/>Streaming · Sential · Loop Detection"]
        end
        subgraph ToolProviders [" Tool Providers "]
            direction LR
            T_CORE["Memory · Web Search<br/>Schedule · Contacts · Inbox"]
            T_EXT["Container · Email · Browser<br/>Subagent · Skill · TTS<br/>MCP Federation"]
        end
        API --> Agent --> ToolProviders
    end
    PG[("PostgreSQL")]
    QD[("Qdrant")]
    BROWSER["Browser Gateway<br/>(Playwright :8083)"]
    subgraph Workspace [" Workspace Containers · containerd "]
        direction LR
        BA["Bot A"] ~~~ BB["Bot B"] ~~~ BC["Bot C"]
    end
    Server --- PG
    Server --- QD
    ToolProviders -.-> BROWSER
    ToolProviders -- "gRPC Bridge over UDS" --> Workspace
```
- Twilight AI — A lightweight, idiomatic AI SDK for Go — inspired by Vercel AI SDK. Provider-agnostic (OpenAI, Anthropic, Google), with first-class streaming, tool calling, MCP support, and embeddings.
Please refer to the Roadmap for more details.
Refer to CONTRIBUTING.md for development setup.
LICENSE: AGPLv3
Copyright (C) 2026 Memoh. All rights reserved.





