
Memoh

Self-hosted, always-on AI agent platform running in containers.

📌 Introduction to Memoh - The Case for an Always-On, Containerized Home Agent


Memoh is an always-on, containerized AI agent system. Create multiple AI bots, each running in its own isolated container with persistent memory, and interact with them across Telegram, Discord, Lark (Feishu), QQ, Matrix, WeCom, WeChat, Email, or the built-in Web UI. Bots can execute commands, edit files, browse the web, call external tools via MCP, and remember everything — like giving each bot its own computer and brain.

Quick Start

One-click install (requires Docker):

```shell
curl -fsSL https://memoh.sh | sudo sh
```

Silent install with all defaults: `curl -fsSL ... | sudo sh -s -- -y`

Or manually:

```shell
git clone --depth 1 https://github.com/memohai/Memoh.git
cd Memoh
cp conf/app.docker.toml config.toml
# Edit config.toml
sudo docker compose up -d
```

Install a specific version:

```shell
curl -fsSL https://memoh.sh | sudo MEMOH_VERSION=v0.6.0 sh
```

Use CN mirror for slow image pulls:

```shell
curl -fsSL https://memoh.sh | sudo USE_CN_MIRROR=true sh
```

On macOS, or if your user is in the `docker` group, `sudo` is not required.

Visit http://localhost:8082 after startup. Default login: admin / admin123

See DEPLOYMENT.md for custom configuration and production setup.

Why Memoh?

Memoh is built for always-on continuity — an AI that stays online, and a memory that stays yours.

  • Lightweight & Fast: Built in Go as home/studio infrastructure; runs efficiently on edge devices.
  • Containerized by default: Each bot gets an isolated container with its own filesystem, network, and tools.
  • Hybrid split: Cloud inference for frontier model capability, local-first memory and indexing for privacy.
  • Multi-user first: Explicit sharing and privacy boundaries across users and bots.
  • Full graphical configuration: Configure bots, channels, MCP, skills, and all settings through a modern web UI — no coding required.

Features

Core

  • 🤖 Multi-Bot & Multi-User: Create multiple bots that chat privately, in groups, or with each other. Bots distinguish individual users in group chats, remember each person's context, and support cross-platform identity binding.
  • 📦 Containerized: Each bot runs in its own isolated containerd container with a dedicated filesystem and network — like having its own computer. Supports snapshots, data export/import, and versioning.
  • 🧠 Memory Engineering: LLM-driven fact extraction, hybrid retrieval (dense + sparse + BM25), 24-hour context loading, memory compaction & rebuild. Pluggable backends: Built-in (off / sparse / dense), Mem0, OpenViking.
  • 💬 9 Channels: Telegram, Discord, Lark (Feishu), QQ, Matrix, WeCom, WeChat, Email (Mailgun / SMTP / Gmail OAuth), and built-in Web UI — with unified streaming, rich text, and attachments.

Agent Capabilities

  • 🔧 MCP (Model Context Protocol): Full MCP support (HTTP / SSE / Stdio / OAuth). Connect external tool servers for extensibility; each bot manages its own independent MCP connections.
  • 🌐 Browser Automation: Headless Chromium/Firefox via Playwright — navigate, click, fill forms, screenshot, read accessibility trees, manage tabs.
  • 🎭 Skills & Subagents: Define bot personality via modular skill files; delegate complex tasks to sub-agents with independent context.
  • Automation: Cron-based scheduled tasks and periodic heartbeat for autonomous bot activity.

Management

  • 🖥️ Web UI: Modern dashboard (Vue 3 + Tailwind CSS) — streaming chat, tool call visualization, file manager, visual configuration for all settings. Dark/light theme, i18n.
  • 🔐 Access Control: Priority-based ACL rules with allow/deny effects, scoped by channel identity, channel type, or conversation.
  • 🧪 Multi-Model: Any OpenAI-compatible, Anthropic, or Google provider. Per-bot model assignment, provider OAuth, and automatic model import.
  • 🚀 One-Click Deploy: Docker Compose with automatic migration, containerd setup, and CNI networking.

Memory System

Memoh's memory system is built around Memory Providers — pluggable backends that control how a bot stores, retrieves, and manages long-term memory.

| Provider | Description |
| --- | --- |
| Built-in | Self-hosted, ships with Memoh. Three modes: Off (file-based, no vector search), Sparse (neural sparse vectors via a local model, no API cost), Dense (embedding-based semantic search via Qdrant). |
| Mem0 | SaaS memory via the Mem0 API. |
| OpenViking | Self-hosted or SaaS memory with its own API. |

Each bot binds one provider. During chat, the bot automatically extracts key facts from every conversation turn and stores them as structured memories. On each new message, the most relevant memories are retrieved via hybrid search and injected into the bot's context — giving it personalized, long-term recall across conversations.

Additional capabilities include memory compaction (merge redundant entries), rebuild, manual creation/editing, and vector manifold visualization (Top-K distribution & CDF curves). See the documentation for setup details.

Gallery

(Screenshots: Chat · Container · Providers · File Manager · Scheduled Tasks · Token Usage)

Architecture

```mermaid
flowchart TB
    subgraph Clients [" Clients "]
        direction LR
        CH["Channels<br/>Telegram · Discord · Feishu · QQ<br/>Matrix · WeCom · WeChat · Email"]
        WEB["Web UI (Vue 3 :8082)"]
    end

    CH & WEB --> API

    subgraph Server [" Server · Go :8080 "]
        API["REST API & Channel Adapters"]

        subgraph Agent [" In-process AI Agent "]
            TWILIGHT["Twilight AI SDK<br/>OpenAI · Anthropic · Google"]
            CONV["Conversation Flow<br/>Streaming · Sential · Loop Detection"]
        end

        subgraph ToolProviders [" Tool Providers "]
            direction LR
            T_CORE["Memory · Web Search<br/>Schedule · Contacts · Inbox"]
            T_EXT["Container · Email · Browser<br/>Subagent · Skill · TTS<br/>MCP Federation"]
        end

        API --> Agent --> ToolProviders
    end

    PG[("PostgreSQL")]
    QD[("Qdrant")]
    BROWSER["Browser Gateway<br/>(Playwright :8083)"]

    subgraph Workspace [" Workspace Containers · containerd "]
        direction LR
        BA["Bot A"] ~~~ BB["Bot B"] ~~~ BC["Bot C"]
    end

    Server --- PG
    Server --- QD
    ToolProviders -.-> BROWSER
    ToolProviders -- "gRPC Bridge over UDS" --> Workspace
```

Sub-projects Born for This Project

  • Twilight AI — A lightweight, idiomatic AI SDK for Go — inspired by Vercel AI SDK. Provider-agnostic (OpenAI, Anthropic, Google), with first-class streaming, tool calling, MCP support, and embeddings.

Roadmap

Please refer to the Roadmap for more details.

Development

Refer to CONTRIBUTING.md for development setup.

Star History

Star History Chart

Contributors

LICENSE: AGPLv3

Copyright (C) 2026 Memoh. All rights reserved.
