philbudden/synapse
🧠 Synapse

Persistent AI memory: capture thoughts, generate embeddings with Ollama, store in Postgres (pgvector), and retrieve via semantic search. Includes an MCP (Model Context Protocol) server exposing capture_memory and search_memories tools.

AI Client → MCP Server → Memory API → (Ollama embeddings) → Postgres (pgvector)

Prerequisites

  • Docker + Docker Compose
  • Ollama running on the host (default: http://localhost:11434)
    • Ensure an embedding model is available, e.g. ollama pull nomic-embed-text

Quick start

cp .env.example .env
docker compose up -d --build

The API will be available at http://localhost:8000 and the MCP server at http://localhost:8080/mcp.

Test memory capture

curl -sS -X POST http://localhost:8000/capture \
  -H 'Content-Type: application/json' \
  -d '{"content":"example memory","source":"manual"}' | cat

Test semantic search

curl -sS 'http://localhost:8000/search?query=example&limit=5' | cat

Notes:

  • /capture also returns a classification object (category + confidence).
  • /search results may include classification metadata when available.
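The two curl examples above can also be driven from Python. A minimal sketch using only the standard library — the payload and query-string shapes mirror the curl examples, and no response fields beyond those are assumed:

```python
import json
import urllib.parse
import urllib.request

API_BASE = "http://localhost:8000"  # adjust if the stack runs elsewhere

def capture_payload(content: str, source: str = "manual") -> dict:
    """Build the JSON body for POST /capture, matching the curl example."""
    return {"content": content, "source": source}

def search_url(query: str, limit: int = 5) -> str:
    """Build the GET /search URL with query-string parameters."""
    qs = urllib.parse.urlencode({"query": query, "limit": limit})
    return f"{API_BASE}/search?{qs}"

def post_json(url: str, payload: dict) -> dict:
    """POST a JSON body and decode the JSON response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Requires the stack to be up (docker compose up -d --build).
    print(post_json(f"{API_BASE}/capture", capture_payload("example memory")))
    with urllib.request.urlopen(search_url("example")) as resp:
        print(json.load(resp))
```

The helpers only build requests, so they can be reused against a different base URL without changes.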

Matrix ingestion (optional)

Enable the Matrix bot service (requires access token + room ID):

docker compose --profile matrix up -d --build

Set these in .env:

  • MATRIX_HOMESERVER
  • MATRIX_USER_ID
  • MATRIX_ACCESS_TOKEN
  • MATRIX_ROOM_ID
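A hypothetical .env fragment showing the expected shape of these variables — every value below is a placeholder, not a real credential:

```shell
# Matrix bot configuration (placeholder values — substitute your own)
MATRIX_HOMESERVER=https://matrix.example.org
MATRIX_USER_ID=@synapse-bot:example.org
MATRIX_ACCESS_TOKEN=replace-with-your-access-token
MATRIX_ROOM_ID=!yourRoomId:example.org
```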

Then post a message in the configured room — the bot will store it and reply with the stored ID + category.

MCP tools

This server implements MCP Streamable HTTP at:

  • POST http://localhost:8080/mcp

Supported MCP methods:

  • initialize
  • ping
  • tools/list
  • tools/call

Tools:

  • capture_memory { content: string, source?: string }
  • search_memories { query: string, limit?: number }

MCP client integration

ChatGPT connectors / HTTP MCP clients

If your client supports MCP Streamable HTTP, configure the MCP endpoint URL as:

  • http://localhost:8080/mcp

Claude Desktop (stdio)

Claude Desktop typically runs MCP servers over stdio (a local command). This repo includes a stdio entrypoint inside the MCP Docker image.

Example claude_desktop_config.json snippet:

{
  "mcpServers": {
    "synapse": {
      "command": "docker",
      "args": [
        "run",
        "--rm",
        "-i",
        "-e",
        "API_BASE_URL=http://host.docker.internal:8000",
        "synapse-mcp",
        "python",
        "/app/stdio_server.py"
      ]
    }
  }
}

Notes:

  • Ensure the Synapse stack is running (docker compose up -d --build).
  • On Linux, replace host.docker.internal with your host IP or another reachable hostname.

Troubleshooting

Ollama unreachable from Docker

By default the stack uses OLLAMA_HOST=host.docker.internal, which resolves on Docker Desktop. On Linux, set OLLAMA_HOST to your host's IP address (or run Ollama in Docker).

Embedding dimension mismatch

The database schema uses VECTOR(768). If your embedding model returns a different dimension, /capture and /search will return a 500 with a clear error. Use a 768-dimensional model (the default, nomic-embed-text, is one) or adjust the schema and code together.
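To confirm what your model returns before capturing anything, you can ask Ollama directly for an embedding and check its length. A sketch assuming Ollama's /api/embeddings endpoint on the default port:

```python
import json
import urllib.request

EXPECTED_DIM = 768  # must match VECTOR(768) in the schema

def check_dimension(embedding: list, expected: int = EXPECTED_DIM) -> bool:
    """Return True when the embedding length matches the schema's vector size."""
    return len(embedding) == expected

if __name__ == "__main__":
    # Requires Ollama running with the model pulled (ollama pull nomic-embed-text).
    req = urllib.request.Request(
        "http://localhost:11434/api/embeddings",
        data=json.dumps({"model": "nomic-embed-text", "prompt": "dimension check"}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        emb = json.load(resp)["embedding"]
    print(len(emb), "ok" if check_dimension(emb) else "mismatch: adjust schema + code")
```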

Postgres/pgvector init

If you changed the schema and need to re-run init, remove the volume:

docker compose down -v

License

MIT (see LICENSE).
