Persistent project memory for AI models and coding agents. Memory MCP stores architecture, decisions, tasks, warnings, preferences, and session state in Supabase so OpenCode, Claude Code CLI, Qwen Code, Codex, or any MCP-compatible client can resume work without losing context.
It is designed to behave like a normal MCP server: install it once, expose one mcpServers entry, and reuse the same server across every client that accepts MCP.
🧠 Long-term project memory across AI tools
Documentation · OpenCode · Claude Code CLI · Codex
- Why it matters
- Quick Start
- Client Setup
- Natural Language Usage
- Features
- Architecture Snapshot
- Documentation
- API Reference
- Examples
- Screenshots
- Support the Project
- Contributing
- Author
AI tools often forget the project state between sessions. Memory MCP fixes that by keeping a durable memory layer for your app, system, and implementation history.
| Aspect | Summary |
|---|---|
| 🎯 Search intent | Memory MCP, AI project memory, Supabase persistent context |
| ⚙️ Core job | Store architecture, decisions, tasks, warnings, and session state |
| 🔌 Interfaces | OpenCode, Claude Code CLI, Qwen Code, Codex, native MCP clients |
| Memory layer | Benefit |
|---|---|
| 🧠 Decisions | Keep technical reasoning consistent across sessions |
| 🏗️ Architecture | Remember how the system is organized and why |
| ✅ Tasks | Resume work from the exact task status |
| ⚠️ Warnings | Preserve risks, blockers, and caveats |
| 🔄 Session state | Continue implementation where the last AI client stopped |
macOS and Linux:

```bash
git clone https://github.com/dannymaaz/memory-mcp.git
cd memory-mcp
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
pip install -e .
cp .env.example .env
memory-mcp
```

Windows PowerShell:
```powershell
git clone https://github.com/dannymaaz/memory-mcp.git
cd memory-mcp
py -m venv .venv
.venv\Scripts\Activate.ps1
pip install -r requirements.txt
pip install -e .
Copy-Item .env.example .env
memory-mcp
```

Then add your Supabase values to `.env`, run `schema.sql` in the Supabase SQL Editor, and register `mcp.json` in your MCP-compatible client.
For normal MCP usage, you only need:
```bash
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_KEY=your-anon-key
OWNER_ID=your-stable-identifier
```

Optional:

```bash
DATABASE_URL=postgresql://user:password@host:6543/postgres
```

- `SUPABASE_URL`: Supabase project URL from Project Settings -> API.
- `SUPABASE_KEY`: Supabase anon key from Project Settings -> API.
- `OWNER_ID`: a stable identifier you define yourself; it is not generated by Supabase. Good options are your GitHub username, team slug, or workspace id.
- `DATABASE_URL`: only needed if you also want direct Postgres access for admin scripts or manual SQL tooling. The MCP server itself uses `SUPABASE_URL` and `SUPABASE_KEY` for normal operation.
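Before wiring up a client, it can help to sanity-check that `.env` contains the required values. The helper below is a hypothetical sketch, not part of the repository:

```python
REQUIRED_KEYS = {"SUPABASE_URL", "SUPABASE_KEY", "OWNER_ID"}

def load_env_file(path):
    """Parse simple KEY=value lines from a .env file into a dict."""
    values = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            # Skip blank lines, comments, and anything that is not KEY=value.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
    return values

def missing_keys(values):
    """Return the required keys that are absent or empty."""
    return sorted(k for k in REQUIRED_KEYS if not values.get(k))
```

If `missing_keys` returns anything, the server will not have the credentials it needs to reach Supabase.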
After `pip install -e .`, clients can launch the server with a normal MCP command entry:
```json
{
  "mcpServers": {
    "memory-mcp": {
      "command": "memory-mcp",
      "env": {
        "SUPABASE_URL": "https://your-project.supabase.co",
        "SUPABASE_KEY": "your-anon-key",
        "OWNER_ID": "your-stable-identifier"
      }
    }
  }
}
```

If a client accepts a standard MCP JSON config with `mcpServers`, you can usually reuse that same block and only adjust the path, interface, or environment values.
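Because most clients read the same `mcpServers` shape, registration can also be scripted instead of editing each config by hand. A sketch, assuming the client stores its config as plain JSON (the helper name and file path are illustrative):

```python
import json
from pathlib import Path

# The same server entry shown above, reused for every client.
MEMORY_MCP_ENTRY = {
    "command": "memory-mcp",
    "env": {
        "SUPABASE_URL": "https://your-project.supabase.co",
        "SUPABASE_KEY": "your-anon-key",
        "OWNER_ID": "your-stable-identifier",
    },
}

def register_memory_mcp(config_path):
    """Merge the memory-mcp server entry into an existing MCP client config."""
    path = Path(config_path)
    config = json.loads(path.read_text()) if path.exists() else {}
    # Preserve any other servers already registered in the file.
    config.setdefault("mcpServers", {})["memory-mcp"] = MEMORY_MCP_ENTRY
    path.write_text(json.dumps(config, indent=2))
    return config
```

Running it against each client's config file keeps every client pointed at the same single installation.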
- Docs site: https://dannymaaz.github.io/memory-mcp/
- SQL schema: `schema.sql`
- MCP config: `mcp.json`
You can keep this repository in its current folder if that is where you want it to live.
For other users who clone it from GitHub, the best pattern is still the same: clone it once, keep a private .env, and connect multiple clients to the same installation.
macOS and Linux:

```bash
git clone https://github.com/dannymaaz/memory-mcp.git
cd memory-mcp
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
pip install -e .
cp .env.example .env
```

Windows PowerShell:

```powershell
git clone https://github.com/dannymaaz/memory-mcp.git
cd memory-mcp
py -m venv .venv
.venv\Scripts\Activate.ps1
pip install -r requirements.txt
pip install -e .
Copy-Item .env.example .env
```

Then:
- Fill in `.env` with your Supabase values.
- Run `schema.sql` in the Supabase SQL Editor.
- Keep the repository in a stable folder.
- Reuse that same folder for all your IDEs and AI clients.
The installed MCP command is the same on Windows, macOS, and Linux:

```bash
memory-mcp
```
Do not copy the server into every project. A single installation is enough.
Recommended pattern:

- one folder for the MCP server,
- one `.env` file inside that folder,
- many repos or apps connected to the same memory backend.
Use any stable folder you control. Do not publish personal local paths in public configs or screenshots.
The simplest pattern is to register one command everywhere:
```json
{
  "mcpServers": {
    "memory-mcp": {
      "command": "memory-mcp",
      "env": {
        "SUPABASE_URL": "https://your-project.supabase.co",
        "SUPABASE_KEY": "your-anon-key",
        "OWNER_ID": "your-stable-identifier"
      }
    }
  }
}
```

That same block works as the base for Antigravity, OpenCode, Claude Code, Codex, and most other MCP-compatible clients.
Usually no.
If a client is configured to launch memory-mcp, it normally starts the server on demand when the client needs it. In normal use, that means you do not have to manually rerun the server every time you turn on the PC.
You only need to start it yourself when:
- testing the server directly,
- debugging outside the client,
- or using a custom setup that does not automatically spawn MCP servers.
Run OpenCode from the repository root or point it to the included mcp.json:
```bash
opencode --mcp-config mcp.json
```

`PROJECT_MEMORY_INTERFACE=opencode` is optional. Use it only if you want to force a client label.
If OpenCode accepts a direct MCP JSON entry, you can paste the same mcpServers.memory-mcp block there.
Register the server using mcp.json or the equivalent Codex MCP settings, then run:
```bash
codex --config mcp.json
```

Run Claude Code with the shared MCP config:

```bash
claude-code --mcp-config mcp.json
```

`PROJECT_MEMORY_INTERFACE=claude-code` is optional. Use it only if you want to force a client label.
Edit the Claude Desktop MCP config file and add a local server entry.
Windows path: `%APPDATA%\Claude\claude_desktop_config.json`

macOS path: `~/Library/Application Support/Claude/claude_desktop_config.json`

Linux path: check your local Claude Desktop app data folder.
Example config:
```json
{
  "mcpServers": {
    "memory-mcp": {
      "command": "memory-mcp",
      "env": {
        "SUPABASE_URL": "https://your-project.supabase.co",
        "SUPABASE_KEY": "your-anon-key",
        "OWNER_ID": "your-stable-identifier"
      }
    }
  }
}
```

After saving the file, restart Claude Desktop.
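If you script the Claude Desktop setup, the config location can be derived from the platform. A sketch; the Windows and macOS paths come from the docs above, while the Linux path is an assumption you should verify locally:

```python
import os
import sys
from pathlib import Path

def claude_desktop_config_path(platform=sys.platform):
    """Return the expected Claude Desktop MCP config path per platform."""
    if platform.startswith("win"):
        # %APPDATA%\Claude\claude_desktop_config.json
        return Path(os.environ.get("APPDATA", "")) / "Claude" / "claude_desktop_config.json"
    if platform == "darwin":
        return Path.home() / "Library" / "Application Support" / "Claude" / "claude_desktop_config.json"
    # Assumption: Linux builds use the XDG config directory; check your install.
    return Path.home() / ".config" / "Claude" / "claude_desktop_config.json"
```

Combined with the JSON block above, this lets one script register the server on any machine.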
If your Antigravity build supports external MCP servers, register the same server there using the same command and environment values:
```bash
memory-mcp
```

Use the same `mcpServers` JSON block as the base config and set the interface to `native` or `antigravity` in your client flow.
Common Windows path:
`%USERPROFILE%\.gemini\antigravity\mcp_config.json`
That means Antigravity can detect the server from a normal MCP JSON config file, just like other clients.
```bash
PROJECT_MEMORY_INTERFACE=qwen-code qwen --mcp-config mcp.json
```

In most MCP-compatible clients, you do not have to manually say which tool to call.
If the client exposes Memory MCP tools and tool use is enabled, the model can decide on its own when to call tools like `load_unified_context`, `capture_project_memory`, `save_cross_interface_decision`, `update_task_status`, or `sync_session_state`.
Typical natural-language prompts:
- "Resume this project and tell me where we left off."
- "Load the stored project memory before continuing the refactor."
- "Save this architecture decision and mark the current task as in progress."
- "Check active warnings before we keep coding."
- "Save everything important from this session in Memory MCP."
- "If this is a new project, create what you need in Memory MCP and start saving memory automatically."
When the model sees those requests, it can map them to the right MCP tools automatically.
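That mapping is normally done by the model itself, but it can be pictured as simple intent routing. A toy, hypothetical sketch (real clients rely on the model's own tool choice, not keyword matching):

```python
# Hypothetical keyword-to-tool routes for illustration only.
ROUTES = [
    ("resume", "load_unified_context"),
    ("where we left off", "load_unified_context"),
    ("decision", "save_cross_interface_decision"),
    ("task", "update_task_status"),
    ("warning", "get_active_warnings"),
    ("save everything", "capture_project_memory"),
]

def suggest_tool(prompt):
    """Return the first tool whose keyword appears in the prompt, if any."""
    lowered = prompt.lower()
    for keyword, tool in ROUTES:
        if keyword in lowered:
            return tool
    return None
```

For example, "Resume this project and tell me where we left off." would route to `load_unified_context`.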
Manual tool calls are still useful when:
- you are debugging an integration,
- you want exact control over the payload,
- or your client does not allow automatic tool use.
If your client disables tool use, the model cannot call MCP tools by itself. In that case, enable MCP/MCP tools in the client or trigger the tool explicitly.
- 🧩 Automatic project resolution and creation from repository context.
- 🔁 Multi-client continuity for OpenCode, Claude Code CLI, Qwen Code, Codex, and native MCP flows.
- 🌿 Git-aware memory with repo path, remote, branch, commit, and working tree status.
- 📦 Checkpoints, prompt patterns, file memory, and timeline snapshots for faster resume flows.
- 🔍 Semantic memory search with Supabase embeddings plus lexical fallback.
- 🗂️ Retention policies, JSON/Markdown export, and import support for backup or migration.
- 🔒 Row Level Security across every persistent table.
- 🧪 Pytest coverage for the server and optimizer.
- 🌐 Public bilingual docs optimized for GitHub and Google search.
- Auto-resolves or auto-creates the project when `project_id` is omitted.
- Detects repository context from git metadata when available.
- Records session summaries and next steps when work stops or switches clients.
- Stores file-level memory and dependency relationships for important modules.
- Detects duplicate tasks, conflicting decisions, and missing file dependencies as warnings.
- Builds a project timeline so you can understand how the work evolved.
| Aspect | Summary |
|---|---|
| Minimal flow | User → AI Client → MCP Server → Supabase |
| What persists | Architecture, decisions, tasks, warnings, preferences, sessions, session state |
- Public docs: `docs/index.html`
- SEO sitemap: `docs/sitemap.xml`
- GitHub Pages target: https://dannymaaz.github.io/memory-mcp/
- Locales: `docs/locales/en.json` and `docs/locales/es.json`
Key tools exposed by the server in `src/server.py`:

- `resolve_project`: auto-detects or auto-creates the current project.
- `create_project`: creates a project explicitly with repo/workspace metadata.
- `list_projects`: lists projects for the current owner or workspace.
- `load_unified_context`: loads optimized durable memory for the current client.
- `save_cross_interface_decision`: persists architecture or implementation decisions.
- `update_task_status`: creates or updates tasks and flags duplicates.
- `create_session`: opens a tracked session with git context.
- `end_session`: closes a session and saves a resume-ready summary.
- `add_warning`: records warnings manually.
- `get_active_warnings`: returns unresolved warnings.
- `sync_session_state`: stores in-progress work for handoff between clients.
- `get_interface_analytics`: returns interface usage trends.
- `save_file_memory`: stores file summaries and dependency edges.
- `save_checkpoint`: saves checkpoints for architecture, blockers, and next steps.
- `save_prompt_pattern`: stores reusable prompt patterns and response preferences.
- `search_semantic_memory`: searches memory semantically or lexically.
- `get_project_timeline`: returns a chronological memory timeline.
- `export_memory_bundle`: exports memory to JSON or Markdown.
- `import_memory_bundle`: imports a memory bundle back into a project.
- `resume_project`: returns a ready-to-use summary to continue work.
- `apply_retention_policy`: stores retention rules and creates archive summaries.
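At the protocol level, each of these tools is reached through a standard MCP `tools/call` JSON-RPC request. A minimal sketch of building such a message; the argument names passed for `update_task_status` are illustrative, since the real parameters are defined in `src/server.py`:

```python
import json

def build_tool_call(tool_name, arguments, request_id=1):
    """Build an MCP tools/call request (JSON-RPC 2.0) for any server tool."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Illustrative arguments only; check the tool's schema for real parameter names.
request = build_tool_call(
    "update_task_status",
    {"title": "Migrate auth module", "status": "in_progress"},
)
payload = json.dumps(request)
```

A client sends this payload over the server's stdio transport; the same shape works for every tool in the list above.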
Typical usage now looks like this:
- The client launches `memory-mcp`.
- The server inspects the current repository context and resolves or creates a project.
- `load_unified_context` returns decisions, tasks, warnings, checkpoints, file memory, prompts, and timeline data.
- During work, sessions, decisions, prompt patterns, file relationships, and warnings are updated.
- When work ends, the server can save session state, create a checkpoint, and return a resume-ready next step.
- "Use Memory MCP for this project. If you detect important decisions, blockers, tasks, or next steps, save them automatically while we work."
- "Before we finish, save everything important from this session in Memory MCP and leave me the next recommended step."
- "Use Memory MCP while we refactor. Document important files, dependencies, architectural decisions, and task progress automatically."
- "If this repository is new to Memory MCP, create the project automatically and start storing context as we go."
- "Resume this project with Memory MCP and give me a summary of what is done, what is missing, and the next recommended step."
- "When we stop, capture the full session in Memory MCP, including decisions, tasks, warnings, prompts, and a checkpoint summary."
- "Use Memory MCP while implementing this task. Keep warnings, task status, and important file memory synchronized as you code."
- "Before handing control back, save everything important from this coding session in Memory MCP and tell me the next safe step."
- "Use Memory MCP during this refactor. Save architectural decisions, file relationships, and task progress automatically."
- "If this repository is new, create the project in Memory MCP and start capturing context as we modify the codebase."
- `examples/antigravity/README.md`
- `examples/claude-desktop/README.md`
- `examples/opencode/README.md`
- `examples/claude-code/README.md`
- `examples/qwen-plugin/README.md`
- `examples/codex-plugin/README.md`
- `examples/native-chat/README.md`
- Uses Memory MCP, AI project memory, and Supabase persistent context in high-signal sections.
- Keeps core keywords near the top for GitHub search and repository previews.
- Ships Open Graph, Twitter Card, JSON-LD, canonical URL, hreflang, and sitemap for Google indexing.
- Includes bilingual docs and MCP-oriented examples for broader discoverability.
OWNER_ID is a stable identifier you choose for yourself or your team. It is not created by Supabase. Good values include your GitHub username, a company slug, or a workspace id.
No, not for normal MCP usage. SUPABASE_URL and SUPABASE_KEY are enough for the server. DATABASE_URL is only useful if you also want direct Postgres access for SQL scripts or admin tooling.
No. Memory MCP now tries to resolve the project automatically from repository context and can create it when missing.
Yes, with a fallback. The server can always do lexical search. If you also store embeddings in Supabase, search_semantic_memory can rank results semantically.
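The lexical fallback can be pictured as simple token-overlap ranking. A toy sketch for intuition only, not the server's actual implementation:

```python
def lexical_score(query, text):
    """Score an entry by the fraction of query tokens it contains."""
    query_tokens = set(query.lower().split())
    text_tokens = set(text.lower().split())
    if not query_tokens:
        return 0.0
    return len(query_tokens & text_tokens) / len(query_tokens)

def lexical_search(query, entries, limit=5):
    """Rank stored memory entries by lexical overlap with the query."""
    ranked = sorted(entries, key=lambda e: lexical_score(query, e), reverse=True)
    return [e for e in ranked if lexical_score(query, e) > 0][:limit]
```

With embeddings stored in Supabase, the semantic path replaces this token overlap with vector similarity, which also catches paraphrases that share no tokens.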
Usually no. MCP-compatible clients normally spawn memory-mcp on demand once they are configured with the command.
Yes. Memory MCP now includes `capture_project_memory`, a high-level tool designed for prompts like:
- "Save everything important from this session in Memory MCP."
- "Store all of this in your memory: decisions, tasks, blockers, next steps, and important files."
When the model uses that tool well, it can persist multiple artifacts in one call: decisions, tasks, warnings, file memory, prompt patterns, session state, and a checkpoint summary.
Memory MCP tries to resolve the active project from repository context. In clients that expose the current workspace or repository path, it can auto-detect which project is active and create storage automatically if it is new.
If your client does not expose the correct repo path, the model can still pass a `repo_path` explicitly to `resolve_project` or `capture_project_memory`.
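When you need to supply that path yourself, detecting the repository root is straightforward: walk upward until a `.git` directory appears. A minimal sketch of that detection:

```python
from pathlib import Path

def find_repo_root(start):
    """Walk up from start until a directory containing .git is found."""
    current = Path(start).resolve()
    for candidate in [current, *current.parents]:
        if (candidate / ".git").exists():
            return candidate
    return None  # not inside a git repository
```

The returned path can then be passed as the `repo_path` argument when calling the tools above.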
- Architecture preview
- Brand mark
If Memory MCP helps your workflow, you can support development here:
See CONTRIBUTING.md for setup, style, PR process, and issue reporting.
- Open a GitHub issue for bugs, ideas, or integration notes.
- Use the docs site to onboard collaborators quickly.
- Extend the schema and examples as your AI workflows grow.
Memory MCP, AI project memory, Supabase persistent context, AI agent memory, OpenCode memory, Claude Code CLI memory, Qwen Code memory, Codex memory, multi-interface AI, context optimization, Danny Maaz.