Persistent AI memory: capture thoughts, generate embeddings with Ollama, store in Postgres (pgvector), and retrieve via semantic search. Includes an MCP (Model Context Protocol) server exposing capture_memory and search_memories tools.
AI Client → MCP Server → Memory API → (Ollama embeddings) → Postgres (pgvector)
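The retrieval step at the end of that pipeline is nearest-neighbour search over embedding vectors. pgvector does this in SQL (its `<=>` operator is cosine distance), but the idea can be sketched in plain Python; toy 3-dim vectors stand in for the 768-dim `nomic-embed-text` output:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity; pgvector's <=> operator returns 1 minus this value."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query: list[float], memories: dict[str, list[float]], k: int = 5) -> list[str]:
    """Return memory ids ranked by similarity to the query embedding."""
    ranked = sorted(memories, key=lambda m: cosine_similarity(query, memories[m]), reverse=True)
    return ranked[:k]

# Toy example: the cat-like memory should outrank the unrelated one.
memories = {
    "cat-note": [0.9, 0.1, 0.0],
    "finance-note": [0.0, 0.2, 0.9],
}
print(top_k([1.0, 0.0, 0.1], memories, k=2))  # ['cat-note', 'finance-note']
```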
- Docker + Docker Compose
- Ollama running on the host (default: `http://localhost:11434`)
- Ensure an embedding model is available, e.g. `ollama pull nomic-embed-text`

Copy the example environment file and start the stack:

```shell
cp .env.example .env
docker compose up -d --build
```

The API will be available at `http://localhost:8000` and the MCP server at `http://localhost:8080/mcp`.
```shell
curl -sS -X POST http://localhost:8000/capture \
  -H 'Content-Type: application/json' \
  -d '{"content":"example memory","source":"manual"}'

curl -sS 'http://localhost:8000/search?query=example&limit=5'
```

Notes:

- `/capture` also returns a `classification` object (category + confidence).
- `/search` results may include `classification` metadata when available.
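A client can read the `classification` object straight off the `/capture` response. The payload below is a hypothetical sketch: only the `classification` object with category and confidence is documented above, the other field names are assumptions:

```python
import json

# Hypothetical /capture response body; fields other than "classification"
# (category + confidence) are assumed for illustration, not a documented contract.
raw = '''{
  "id": "mem-123",
  "content": "example memory",
  "classification": {"category": "note", "confidence": 0.87}
}'''

resp = json.loads(raw)
cls = resp.get("classification") or {}  # tolerate responses without classification
print(cls.get("category"), cls.get("confidence"))  # note 0.87
```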
Enable the Matrix bot service (requires an access token and a room ID):

```shell
docker compose --profile matrix up -d --build
```

Set these in `.env`:

- `MATRIX_HOMESERVER`
- `MATRIX_USER_ID`
- `MATRIX_ACCESS_TOKEN`
- `MATRIX_ROOM_ID`
Then post a message in the configured room; the bot will store it and reply with the stored ID and category.
This server implements MCP Streamable HTTP at:
POST http://localhost:8080/mcp
Supported MCP methods:
- `initialize`
- `ping`
- `tools/list`
- `tools/call`
Tools:
- `capture_memory`: `{ content: string, source?: string }`
- `search_memories`: `{ query: string, limit?: number }`
If your client supports MCP Streamable HTTP, configure the MCP endpoint URL as:
http://localhost:8080/mcp
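MCP Streamable HTTP carries JSON-RPC 2.0 messages, so a `tools/call` request for `capture_memory` looks roughly like this. A sketch: only the tool names and argument shapes come from this README, the envelope is standard JSON-RPC as used by MCP:

```python
import json

# JSON-RPC 2.0 envelope used by MCP; tool name and arguments follow
# the capture_memory schema listed above.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "capture_memory",
        "arguments": {"content": "example memory", "source": "manual"},
    },
}
body = json.dumps(request)
# POST `body` to http://localhost:8080/mcp with Content-Type: application/json
# (Streamable HTTP clients also send Accept: application/json, text/event-stream).
print(body)
```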
Claude Desktop typically runs MCP servers over stdio (a local command). This repo includes a stdio entrypoint inside the MCP Docker image.
Example claude_desktop_config.json snippet:
```json
{
  "mcpServers": {
    "synapse": {
      "command": "docker",
      "args": [
        "run",
        "--rm",
        "-i",
        "-e",
        "API_BASE_URL=http://host.docker.internal:8000",
        "synapse-mcp",
        "python",
        "/app/stdio_server.py"
      ]
    }
  }
}
```

Notes:

- Ensure the Synapse stack is running (`docker compose up -d --build`).
- On Linux, replace `host.docker.internal` with your host IP or another reachable hostname.
By default we use `OLLAMA_HOST=host.docker.internal` (works on Docker Desktop). On Linux, set `OLLAMA_HOST` to your host IP (or run Ollama in Docker).

The database schema uses `VECTOR(768)`. If your embedding model returns a different dimension, `/capture` and `/search` will return a 500 with a clear error. Use a 768-dim model (the default `nomic-embed-text`) or adjust the schema and code together.
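The dimension check behind that 500 can be sketched as follows (a hypothetical helper; the real validation and error message in the API may differ):

```python
EXPECTED_DIM = 768  # must match VECTOR(768) in the database schema

def validate_embedding(embedding: list[float]) -> list[float]:
    """Reject embeddings whose dimension doesn't match the pgvector column."""
    if len(embedding) != EXPECTED_DIM:
        raise ValueError(
            f"embedding has {len(embedding)} dimensions, schema expects {EXPECTED_DIM}; "
            "use a 768-dim model or alter the VECTOR(768) column and code together"
        )
    return embedding

validate_embedding([0.0] * 768)       # a nomic-embed-text-sized vector passes
try:
    validate_embedding([0.0] * 1024)  # e.g. a 1024-dim model
except ValueError as e:
    print("rejected:", e)
```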
If you changed the schema and need to re-run init, remove the volume:
```shell
docker compose down -v
```

MIT (see LICENSE).