Memory MCP


Persistent project memory for AI models and coding agents. Memory MCP stores architecture, decisions, tasks, warnings, preferences, and session state in Supabase so OpenCode, Claude Code CLI, Qwen Code, Codex, or any MCP-compatible client can resume work without losing context.

It is designed to behave like a normal MCP server: install it once, expose one mcpServers entry, and reuse the same server across every client that accepts MCP.

Memory MCP logo

🧠 Long-term project memory across AI tools
Documentation · OpenCode · Claude Code CLI · Codex


Why it matters

AI tools often forget the project state between sessions. Memory MCP fixes that by keeping a durable memory layer for your app, system, and implementation history.

  • 🎯 Search intent: Memory MCP, AI project memory, Supabase persistent context
  • ⚙️ Core job: store architecture, decisions, tasks, warnings, and session state
  • 🌍 Interfaces: OpenCode, Claude Code CLI, Qwen Code, Codex, native MCP clients

Memory layers and their benefits:

  • 🧠 Decisions: keep technical reasoning consistent across sessions
  • 🏗️ Architecture: remember how the system is organized and why
  • ✅ Tasks: resume work from the exact task status
  • ⚠️ Warnings: preserve risks, blockers, and caveats
  • 🔄 Session state: continue implementation where the last AI client stopped

Quick Start

macOS and Linux:

git clone https://github.com/dannymaaz/memory-mcp.git
cd memory-mcp
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
pip install -e .
cp .env.example .env
memory-mcp

Windows PowerShell:

git clone https://github.com/dannymaaz/memory-mcp.git
cd memory-mcp
py -m venv .venv
.venv\Scripts\Activate.ps1
pip install -r requirements.txt
pip install -e .
Copy-Item .env.example .env
memory-mcp

Then add your Supabase values to .env, run schema.sql in Supabase SQL Editor, and register mcp.json in your MCP-compatible client.

What goes in .env

For normal MCP usage, you only need:

SUPABASE_URL=https://your-project.supabase.co
SUPABASE_KEY=your-anon-key
OWNER_ID=your-stable-identifier

Optional:

DATABASE_URL=postgresql://user:password@host:6543/postgres
  • SUPABASE_URL: Supabase project URL from Project Settings -> API
  • SUPABASE_KEY: Supabase anon key from Project Settings -> API
  • OWNER_ID: a stable identifier you define yourself; it is not generated by Supabase. Good options are your GitHub username, team slug, or workspace id.
  • DATABASE_URL: only needed if you also want direct Postgres access for admin scripts or manual SQL tooling. The MCP server itself uses SUPABASE_URL and SUPABASE_KEY for normal operation.
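Before the first run, the required values can be sanity-checked with a few lines of stdlib Python. This is an illustrative sketch, not part of the repository; the server loads .env on its own, and the sample values below are placeholders.

```python
REQUIRED = ("SUPABASE_URL", "SUPABASE_KEY", "OWNER_ID")

def parse_env_lines(lines):
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    values = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip()
    return values

def missing_keys(values):
    """Return required keys that are absent or empty."""
    return [key for key in REQUIRED if not values.get(key)]

sample = [
    "SUPABASE_URL=https://your-project.supabase.co",
    "SUPABASE_KEY=your-anon-key",
]
print(missing_keys(parse_env_lines(sample)))  # -> ['OWNER_ID']
```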

Standard MCP pattern

After pip install -e ., clients can launch the server with a normal MCP command entry:

{
  "mcpServers": {
    "memory-mcp": {
      "command": "memory-mcp",
      "env": {
        "SUPABASE_URL": "https://your-project.supabase.co",
        "SUPABASE_KEY": "your-anon-key",
        "OWNER_ID": "your-stable-identifier"
      }
    }
  }
}

If a client accepts a standard MCP JSON with mcpServers, you can usually reuse that same block and only adjust the path, interface, or environment values.
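If you manage several client configs, the block above can also be generated instead of copied by hand. A minimal sketch (the helper function and placeholder values are illustrative, not part of the repository):

```python
import json

def memory_mcp_entry(url, key, owner_id, command="memory-mcp"):
    """Build the reusable mcpServers block shown above as a dict."""
    return {
        "mcpServers": {
            "memory-mcp": {
                "command": command,
                "env": {
                    "SUPABASE_URL": url,
                    "SUPABASE_KEY": key,
                    "OWNER_ID": owner_id,
                },
            }
        }
    }

config = memory_mcp_entry(
    "https://your-project.supabase.co", "your-anon-key", "your-stable-identifier"
)
print(json.dumps(config, indent=2))
```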

Quick Links

  • Docs site: https://dannymaaz.github.io/memory-mcp/
  • SQL schema: schema.sql
  • MCP config: mcp.json

Client Setup

You can keep this repository in any stable folder you control. The pattern is the same for everyone who clones it from GitHub: clone it once, keep a private .env, and connect multiple clients to the same installation.

1. Install from GitHub

macOS and Linux:

git clone https://github.com/dannymaaz/memory-mcp.git
cd memory-mcp
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
pip install -e .
cp .env.example .env

Windows PowerShell:

git clone https://github.com/dannymaaz/memory-mcp.git
cd memory-mcp
py -m venv .venv
.venv\Scripts\Activate.ps1
pip install -r requirements.txt
pip install -e .
Copy-Item .env.example .env

Then:

  1. Fill in .env with your Supabase values.
  2. Run schema.sql in Supabase SQL Editor.
  3. Keep the repository in a stable folder.
  4. Reuse that same folder for all your IDEs and AI clients.

The installed MCP command is the same on Windows, macOS, and Linux:

memory-mcp

2. Keep one central installation

Do not copy the server into every project. A single installation is enough.

Recommended pattern:

  • one folder for the MCP server,
  • one .env file inside that folder,
  • many repos or apps connected to the same memory backend.

Use any stable folder you control. Do not publish personal local paths in public configs or screenshots.

3. Configure it like any other MCP server

The simplest pattern is to register one command everywhere:

{
  "mcpServers": {
    "memory-mcp": {
      "command": "memory-mcp",
      "env": {
        "SUPABASE_URL": "https://your-project.supabase.co",
        "SUPABASE_KEY": "your-anon-key",
        "OWNER_ID": "your-stable-identifier"
      }
    }
  }
}

That same block works as the base for Antigravity, OpenCode, Claude Code, Codex, and most other MCP-compatible clients.

4. Do I need to start it after every reboot?

Usually no.

If a client is configured to launch memory-mcp, it normally starts the server on demand when the client needs it. In normal use, that means you do not have to manually rerun the server every time you turn on the PC.

You only need to start it yourself when:

  • testing the server directly,
  • debugging outside the client,
  • or using a custom setup that does not automatically spawn MCP servers.
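Under the hood, "on demand" means the client spawns the memory-mcp process and speaks JSON-RPC 2.0 to it over stdio. As a rough illustration, the first message a client sends is the MCP initialize request; the field names follow the MCP handshake, while the protocol version and client info strings below are assumptions:

```python
import json

def initialize_request(client_name, request_id=1):
    """JSON-RPC 2.0 'initialize' request an MCP client sends after
    spawning the server over stdio. Version strings are assumptions."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            "protocolVersion": "2024-11-05",
            "capabilities": {},
            "clientInfo": {"name": client_name, "version": "0.0.0"},
        },
    }

message = initialize_request("opencode")
print(json.dumps(message))
```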

5. Configure each client

OpenCode

Run OpenCode from the repository root or point it to the included mcp.json:

opencode --mcp-config mcp.json

PROJECT_MEMORY_INTERFACE=opencode is optional. Use it only if you want to force a client label.

If OpenCode accepts a direct MCP JSON entry, you can paste the same mcpServers.memory-mcp block there.

Codex

Register the server using mcp.json or the equivalent Codex MCP settings, then run:

codex --config mcp.json

Claude Code CLI

Run Claude Code with the shared MCP config:

claude-code --mcp-config mcp.json

PROJECT_MEMORY_INTERFACE=claude-code is optional. Use it only if you want to force a client label.

Claude Desktop

Edit the Claude Desktop MCP config file and add a local server entry.

Windows path:

%APPDATA%\Claude\claude_desktop_config.json

macOS path:

~/Library/Application Support/Claude/claude_desktop_config.json

Linux path:

Check your local Claude Desktop app data folder

Example config:

{
  "mcpServers": {
    "memory-mcp": {
      "command": "memory-mcp",
      "env": {
        "SUPABASE_URL": "https://your-project.supabase.co",
        "SUPABASE_KEY": "your-anon-key",
        "OWNER_ID": "your-stable-identifier"
      }
    }
  }
}

After saving the file, restart Claude Desktop.

Antigravity

If your Antigravity build supports external MCP servers, register the same server there using the same command and environment values:

memory-mcp

Use the same mcpServers JSON block as the base config and set the interface to native or antigravity in your client flow.

Common Windows path:

%USERPROFILE%\.gemini\antigravity\mcp_config.json

That means Antigravity can detect the server from a normal MCP JSON config file, just like other clients.

Qwen Code

PROJECT_MEMORY_INTERFACE=qwen-code qwen --mcp-config mcp.json

Natural Language Usage

In most MCP-compatible clients, you do not have to manually say which tool to call. If the client exposes Memory MCP tools and tool use is enabled, the model can decide on its own when to call tools like load_unified_context, capture_project_memory, save_cross_interface_decision, update_task_status, or sync_session_state.

Typical natural-language prompts:

  • "Resume this project and tell me where we left off."
  • "Load the stored project memory before continuing the refactor."
  • "Save this architecture decision and mark the current task as in progress."
  • "Check active warnings before we keep coding."
  • "Save everything important from this session in Memory MCP."
  • "If this is a new project, create what you need in Memory MCP and start saving memory automatically."

When the model sees those requests, it can map them to the right MCP tools automatically.

Manual tool calls are still useful when:

  • you are debugging an integration,
  • you want exact control over the payload,
  • or your client does not allow automatic tool use.

If your client disables tool use, the model cannot call MCP tools by itself. In that case, enable MCP/MCP tools in the client or trigger the tool explicitly.
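Conceptually, this mapping from phrasing to tool is just intent routing. A toy keyword router makes the idea concrete; real clients let the model decide, and these keyword rules are purely illustrative:

```python
def suggest_tool(prompt):
    """Map a natural-language request to a Memory MCP tool name.
    Illustrative only: real tool selection is done by the model."""
    rules = [
        (("resume", "left off"), "resume_project"),
        (("save everything", "store all"), "capture_project_memory"),
        (("warning", "blocker"), "get_active_warnings"),
        (("load", "context"), "load_unified_context"),
    ]
    text = prompt.lower()
    for keywords, tool in rules:
        if any(keyword in text for keyword in keywords):
            return tool
    return None
```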

Features

  • 🧩 Automatic project resolution and creation from repository context.
  • 🔀 Multi-client continuity for OpenCode, Claude Code CLI, Qwen Code, Codex, and native MCP flows.
  • 🌿 Git-aware memory with repo path, remote, branch, commit, and working tree status.
  • 📦 Checkpoints, prompt patterns, file memory, and timeline snapshots for faster resume flows.
  • 🔎 Semantic memory search with Supabase embeddings plus lexical fallback.
  • 🗂️ Retention policies, JSON/Markdown export, and import support for backup or migration.
  • 🔐 Row Level Security across every persistent table.
  • 🧪 Pytest coverage for the server and optimizer.
  • 🌐 Public bilingual docs optimized for GitHub and Google search.

What it automates

  • Auto-resolves or auto-creates the project when project_id is omitted.
  • Detects repository context from git metadata when available.
  • Records session summaries and next steps when work stops or switches clients.
  • Stores file-level memory and dependency relationships for important modules.
  • Detects duplicate tasks, conflicting decisions, and missing file dependencies as warnings.
  • Builds a project timeline so you can understand how the work evolved.
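The repository detection behind the first two points boils down to locating the enclosing git repository. A simplified stand-in for that step (the real server also reads remote, branch, and commit metadata):

```python
from pathlib import Path

def find_repo_root(start):
    """Walk upward from `start` looking for a .git directory.
    Simplified stand-in for the server's repository detection."""
    path = Path(start).resolve()
    for candidate in (path, *path.parents):
        if (candidate / ".git").exists():
            return candidate
    return None
```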

Architecture Snapshot

Architecture diagram

Minimal flow:

  User → AI Client → MCP Server → Supabase

What persists:

  • Architecture
  • Decisions
  • Tasks
  • Warnings
  • Preferences
  • Sessions
  • Session state

Documentation

  • Public docs: docs/index.html
  • SEO sitemap: docs/sitemap.xml
  • GitHub Pages target: https://dannymaaz.github.io/memory-mcp/
  • Locales: docs/locales/en.json and docs/locales/es.json

API Reference

Key tools exposed by the server in src/server.py:

  1. resolve_project – auto-detects or auto-creates the current project.
  2. create_project – creates a project explicitly with repo/workspace metadata.
  3. list_projects – lists projects for the current owner or workspace.
  4. load_unified_context – loads optimized durable memory for the current client.
  5. save_cross_interface_decision – persists architecture or implementation decisions.
  6. update_task_status – creates or updates tasks and flags duplicates.
  7. create_session – opens a tracked session with git context.
  8. end_session – closes a session and saves a resume-ready summary.
  9. add_warning – records warnings manually.
  10. get_active_warnings – returns unresolved warnings.
  11. sync_session_state – stores in-progress work for handoff between clients.
  12. get_interface_analytics – returns interface usage trends.
  13. save_file_memory – stores file summaries and dependency edges.
  14. save_checkpoint – saves checkpoints for architecture, blockers, and next steps.
  15. save_prompt_pattern – stores reusable prompt patterns and response preferences.
  16. search_semantic_memory – searches memory semantically or lexically.
  17. get_project_timeline – returns a chronological memory timeline.
  18. export_memory_bundle – exports memory to JSON or Markdown.
  19. import_memory_bundle – imports a memory bundle back into a project.
  20. resume_project – returns a ready-to-use summary to continue work.
  21. apply_retention_policy – stores retention rules and creates archive summaries.
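When a client invokes one of these tools, it wraps the call in a JSON-RPC tools/call request. The method and params shape below follow the MCP spec; the argument names passed to update_task_status are illustrative, not the server's exact schema:

```python
import json

def tool_call(name, arguments, request_id=2):
    """JSON-RPC 'tools/call' request for one of the tools listed above.
    The argument names used here are illustrative."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

request = tool_call(
    "update_task_status",
    {"title": "Refactor auth module", "status": "in_progress"},
)
print(json.dumps(request, indent=2))
```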

Automatic Project Memory Workflow

Typical usage now looks like this:

  1. The client launches memory-mcp.
  2. The server inspects the current repository context and resolves or creates a project.
  3. load_unified_context returns decisions, tasks, warnings, checkpoints, file memory, prompts, and timeline data.
  4. During work, sessions, decisions, prompt patterns, file relationships, and warnings are updated.
  5. When work ends, the server can save session state, create a checkpoint, and return a resume-ready next step.
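Step 3 can be pictured as folding the loaded context pieces into a resume prompt. This is a toy sketch: the field names mirror the workflow above, but the real payload returned by load_unified_context is richer and its exact shape is not shown here.

```python
def build_resume_summary(context):
    """Fold unified-context pieces into a short resume prompt.
    Field names are assumptions mirroring the workflow above."""
    lines = ["Project resume:"]
    for decision in context.get("decisions", []):
        lines.append(f"- decision: {decision}")
    for task in context.get("tasks", []):
        lines.append(f"- task: {task}")
    for warning in context.get("warnings", []):
        lines.append(f"- warning: {warning}")
    return "\n".join(lines)
```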

Prompt Recipes By Client

Antigravity

  • "Use Memory MCP for this project. If you detect important decisions, blockers, tasks, or next steps, save them automatically while we work."
  • "Before we finish, save everything important from this session in Memory MCP and leave me the next recommended step."

OpenCode

  • "Use Memory MCP while we refactor. Document important files, dependencies, architectural decisions, and task progress automatically."
  • "If this repository is new to Memory MCP, create the project automatically and start storing context as we go."

Claude Code CLI

  • "Resume this project with Memory MCP and give me a summary of what is done, what is left, and the recommended next step."
  • "When we stop, capture the full session in Memory MCP, including decisions, tasks, warnings, prompts, and a checkpoint summary."

Codex

  • "Use Memory MCP while implementing this task. Keep warnings, task status, and important file memory synchronized as you code."
  • "Before handing control back, save everything important from this coding session in Memory MCP and tell me the next safe step."

Qwen Code

  • "Use Memory MCP during this refactor. Save architectural decisions, file relationships, and task progress automatically."
  • "If this repository is new, create the project in Memory MCP and start capturing context as we modify the codebase."

Examples

  • examples/antigravity/README.md
  • examples/claude-desktop/README.md
  • examples/opencode/README.md
  • examples/claude-code/README.md
  • examples/qwen-plugin/README.md
  • examples/codex-plugin/README.md
  • examples/native-chat/README.md

SEO Highlights

  • Uses Memory MCP, AI project memory, and Supabase persistent context in high-signal sections.
  • Keeps core keywords near the top for GitHub search and repository previews.
  • Ships Open Graph, Twitter Card, JSON-LD, canonical URL, hreflang, and sitemap for Google indexing.
  • Includes bilingual docs and MCP-oriented examples for broader discoverability.

FAQ

What is OWNER_ID?

OWNER_ID is a stable identifier you choose for yourself or your team. It is not created by Supabase. Good values include your GitHub username, a company slug, or a workspace id.

Do I need DATABASE_URL?

No, not for normal MCP usage. SUPABASE_URL and SUPABASE_KEY are enough for the server. DATABASE_URL is only useful if you also want direct Postgres access for SQL scripts or admin tooling.

Do I need project_id every time?

No. Memory MCP now tries to resolve the project automatically from repository context and can create it when missing.

Does it work with semantic search immediately?

Yes, with a fallback. The server can always do lexical search. If you also store embeddings in Supabase, search_semantic_memory can rank results semantically.
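To make the fallback concrete, a lexical ranker can be as simple as token overlap between the query and each stored memory. This is a stand-in sketch; the server's actual lexical scoring may differ.

```python
def lexical_search(query, memories, top_k=3):
    """Rank memories by token overlap with the query.
    Illustrative stand-in for the server's lexical fallback."""
    query_tokens = set(query.lower().split())
    scored = []
    for memory in memories:
        overlap = len(query_tokens & set(memory.lower().split()))
        if overlap:
            scored.append((overlap, memory))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [memory for _, memory in scored[:top_k]]
```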

Do I have to start the server manually after every reboot?

Usually no. MCP-compatible clients normally spawn memory-mcp on demand once they are configured with the command.

Can I just tell the model to save everything?

Yes. Memory MCP now includes capture_project_memory, a high-level tool designed for prompts like:

  • "Save everything important from this session in Memory MCP."
  • "Store all of this in your memory: decisions, tasks, blockers, next steps, and important files."

When the model uses that tool well, it can persist multiple artifacts in one call: decisions, tasks, warnings, file memory, prompt patterns, session state, and a checkpoint summary.
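As a rough picture of such a bundled call, the arguments might group a session summary with its artifacts. The field names below are assumptions for illustration; consult the tool's actual schema in src/server.py.

```python
def capture_payload(summary, decisions=(), tasks=(), warnings=(), next_step=None):
    """Illustrative argument bundle for a capture_project_memory call.
    Field names are assumptions, not the tool's documented schema."""
    return {
        "session_summary": summary,
        "decisions": list(decisions),
        "tasks": list(tasks),
        "warnings": list(warnings),
        "next_step": next_step,
    }

payload = capture_payload(
    "Refactored the auth module",
    decisions=["Use JWT over server sessions"],
    next_step="Add integration tests",
)
```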

What if I open multiple projects in an IDE?

Memory MCP tries to resolve the active project from repository context. In clients that expose the current workspace or repository path, it can auto-detect which project is active and create storage automatically if it is new.

If your client does not expose the correct repo path, the model can still pass a repo_path explicitly to resolve_project or capture_project_memory.

Screenshots

Architecture preview
Architecture preview
Logo preview
Brand mark

Support the Project

If Memory MCP helps your workflow, you can support development here:

PayPal Ko-fi

Contributing

See CONTRIBUTING.md for setup, style, PR process, and issue reporting.

Author

Community

  • Open a GitHub issue for bugs, ideas, or integration notes.
  • Use the docs site to onboard collaborators quickly.
  • Extend the schema and examples as your AI workflows grow.

Search Keywords

Memory MCP, AI project memory, Supabase persistent context, AI agent memory, OpenCode memory, Claude Code CLI memory, Qwen Code memory, Codex memory, multi-interface AI, context optimization, Danny Maaz.
