Complete guide for deploying MemoryGraph in any environment.
- Installation Methods
- Configuration
- Backend Selection
- Tool Profiles
- Environment Variables
- Docker Deployment
- Migration Guide
- Troubleshooting
- Production Checklist
Core Mode (Default)
```bash
pip install memorygraphMCP
```

- SQLite backend
- 9 core tools
- Zero configuration
- Best for: Getting started, personal use
Extended Mode
```bash
pip install "memorygraphMCP[neo4j]"
# or
pip install "memorygraphMCP[falkordblite]"  # FalkorDBLite: embedded with Cypher
# or
pip install "memorygraphMCP[falkordb]"      # FalkorDB: client-server, high performance
# or
pip install "memorygraphMCP[all]"
```

- SQLite/FalkorDBLite/FalkorDB/Neo4j/Memgraph backends
- 11 tools (core + advanced)
- Complex queries and analytics
- Best for: Power users, production
SQLite Mode
```bash
# Clone repository
git clone https://github.com/gregorydickson/memory-graph.git
cd memory-graph

# Start with Docker Compose
docker compose up -d
```

Neo4j Mode

```bash
docker compose -f docker-compose.neo4j.yml up -d
```

Memgraph Mode

```bash
docker compose -f docker-compose.full.yml up -d
```

From Source (Development)

```bash
# Clone repository
git clone https://github.com/gregorydickson/memory-graph.git
cd memory-graph

# Install in development mode
pip install -e .

# Or with all features
pip install -e ".[all,dev]"
```

Use Cases:
- Quick testing without installation
- CI/CD pipelines and automation
- Version testing and comparison
- One-time operations
- Trying before installing
Installation:
```bash
# Install uv (one time only)
pip install uv
# or via curl
curl -LsSf https://astral.sh/uv/install.sh | sh
```

Usage:
```bash
# Check version
uvx memorygraph --version

# Show configuration
uvx memorygraph --show-config

# Health check
uvx memorygraph --health

# Run server (ephemeral)
uvx memorygraph --backend sqlite --profile extended

# Test specific version
uvx memorygraph@1.0.0 --version
```

Limitations:
- ⚠️ First run is slower: the package is downloaded and cached from PyPI (~5-10 seconds)
- ⚠️ Not recommended for persistent MCP servers: use pip install instead
- ⚠️ Requires an explicit database path for data persistence: `MEMORY_SQLITE_PATH=~/.memorygraph/memory.db uvx memorygraph`
- ⚠️ Environment variables must be set per invocation (no persistent config)
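One way to soften the last two limitations is a tiny wrapper script that pins the environment before handing off to uvx. This is just a sketch; the wrapper location (`~/bin/memorygraph-uvx`) is an example, not a convention the project defines:

```shell
# Sketch: persist uvx settings in a small wrapper script.
mkdir -p "$HOME/bin" "$HOME/.memorygraph"
cat > "$HOME/bin/memorygraph-uvx" <<'EOF'
#!/bin/sh
# Pin the database path so every uvx run sees the same data.
export MEMORY_SQLITE_PATH="$HOME/.memorygraph/memory.db"
exec uvx memorygraph "$@"
EOF
chmod +x "$HOME/bin/memorygraph-uvx"
```

Invoking `memorygraph-uvx --version` then behaves like `uvx memorygraph --version` with the environment already set.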
CI/CD Example (GitHub Actions):
```yaml
name: Test Memory Server
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Install uv
        run: pip install uv
      - name: Test memory server
        run: |
          uvx memorygraph --version
          uvx memorygraph --show-config
      - name: Test specific version
        run: uvx memorygraph@1.0.0 --health
```

GitLab CI Example:
```yaml
test_memory:
  stage: test
  script:
    - pip install uv
    - uvx memorygraph --version
    - uvx memorygraph --health
```

Docker Build Example:
```dockerfile
FROM python:3.11-slim

# Install uv
RUN pip install uv

# Use uvx to run one-time operations without installing
RUN uvx memorygraph --show-config

# For a persistent server, use pip install instead
RUN pip install memorygraphMCP
```

When to Use uvx vs pip install:
| Scenario | Use uvx | Use pip install |
|---|---|---|
| Quick testing | ✅ Yes | ❌ Overkill |
| MCP server (daily use) | ❌ No | ✅ Yes |
| CI/CD automation | ✅ Yes | Either works |
| Version comparison | ✅ Yes | Manual switching |
| Production deployment | ❌ No | ✅ Yes |
According to the official Claude Code documentation, the recommended way to configure MCP servers for the Claude Code CLI is the claude mcp add command. Manual JSON editing is not the intended workflow for CLI users.
Note: These instructions are specific to Claude Code CLI. For other Claude Code interfaces (VS Code extension, Desktop app, Web), see CLAUDE_CODE_SETUP.md for interface-specific instructions.
Claude Code uses multiple configuration files with different purposes. This is admittedly messy, and Anthropic is aware of the documentation issues.
| File | Purpose | What Goes Here |
|---|---|---|
| `.mcp.json` | Project MCP servers | Server configurations for a specific project (created by `claude mcp add --scope project`) |
| `~/.claude.json` | Global MCP servers (legacy) | User-level server configurations (managed by `claude mcp add`) |
| `~/.claude/settings.json` | Permissions & behavior | `enabledMcpjsonServers`, environment variables, tool behavior settings |
✅ DO:
- Use the `claude mcp add` command (official method)
- Let the CLI manage configuration files for you
- Use `--scope project` for project-specific servers
- Use the default (user-level) scope for servers available across all projects

❌ DON'T:
- Put MCP servers in `~/.claude/settings.json` - it won't work
- Manually edit `.mcp.json` or `~/.claude.json` unless absolutely necessary
- Try to manually manage the "chaotic grab bag" of legacy global settings
Why this matters: The configuration system is complex and has legacy files. Using claude mcp add ensures your MCP servers are configured in the correct location and format.
Prerequisites: You must have already installed MemoryGraph via pip (see Installation Methods above). The claude mcp add command configures Claude Code to use the already-installed memorygraph command.
These examples use claude mcp add which is CLI-specific. For VS Code extension, Desktop app, or Web, see CLAUDE_CODE_SETUP.md.
```bash
# Prerequisite: pip install memorygraphMCP (must be run first)
claude mcp add --transport stdio memorygraph memorygraph
```

Uses:
- SQLite backend
- Core profile (9 tools)
- Default database path: `~/.memorygraph/memory.db`
- Available across all projects

```bash
# Prerequisite: pip install memorygraphMCP (must be run first)
claude mcp add --transport stdio memorygraph memorygraph --scope project
```

Creates `.mcp.json` in your project root.
```bash
# Prerequisite: pip install "memorygraphMCP[neo4j]" (must be run first)
claude mcp add --transport stdio memorygraph memorygraph --profile extended
```

```bash
# Prerequisite: pip install "memorygraphMCP[neo4j,intelligence]" (must be run first)
claude mcp add --transport stdio memorygraph memorygraph --profile extended --backend neo4j \
  --env MEMORY_NEO4J_URI=bolt://localhost:7687 \
  --env MEMORY_NEO4J_USER=neo4j \
  --env MEMORY_NEO4J_PASSWORD=your-password
```

```bash
# List all MCP servers
claude mcp list

# Get details for memorygraph
claude mcp get memorygraph
```

For Claude Code CLI users: Use claude mcp add instead (see above).
For other Claude Code interfaces (VS Code extension, Desktop app): Manual configuration is required as the claude mcp add command is CLI-specific. See CLAUDE_CODE_SETUP.md for interface-specific instructions.
For other MCP clients (Cursor, Continue, etc.): Use the manual JSON configuration below.
If you need to manually configure (for non-Claude Code clients):
```json
{
  "mcpServers": {
    "memorygraph": {
      "command": "memorygraph"
    }
  }
}
```

Uses:
- SQLite backend
- Core profile (9 tools)
- Default database path: `~/.memorygraph/memory.db`
```json
{
  "mcpServers": {
    "memorygraph": {
      "command": "memorygraph",
      "args": ["--profile", "extended"]
    }
  }
}
```

```json
{
  "mcpServers": {
    "memorygraph": {
      "command": "memorygraph",
      "args": ["--backend", "neo4j", "--profile", "extended"],
      "env": {
        "MEMORY_NEO4J_URI": "bolt://localhost:7687",
        "MEMORY_NEO4J_USER": "neo4j",
        "MEMORY_NEO4J_PASSWORD": "your-password"
      }
    }
  }
}
```

MemoryGraph supports five backend options:
- sqlite - Embedded, default, zero-config
- falkordblite - Embedded, zero-config with native Cypher/graph support
- falkordb - Client-server, user manages FalkorDB
- neo4j - Client-server, enterprise
- memgraph - Client-server, real-time analytics
When to Use:
- Getting started
- Personal projects
- <10k memories
- No setup time
- Portable database
Configuration:
```bash
export MEMORY_BACKEND=sqlite
export MEMORY_SQLITE_PATH=~/.memorygraph/memory.db
```

Pros:
- Zero configuration
- No dependencies
- Portable file-based
- Fast for small datasets
Cons:
- Slower graph queries at scale
- Limited concurrent writes
- Manual relationship traversal
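The "manual relationship traversal" point is worth illustrating: in a relational store, multi-hop graph queries must be written as joins or recursive CTEs rather than a native graph pattern. A sketch against a hypothetical two-table schema (not MemoryGraph's actual schema):

```python
import sqlite3

# Hypothetical schema: a memories table plus an edge table between rows.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE memories (id INTEGER PRIMARY KEY, content TEXT);
CREATE TABLE relationships (src INTEGER, dst INTEGER);
INSERT INTO memories VALUES (1, 'root'), (2, 'child'), (3, 'grandchild');
INSERT INTO relationships VALUES (1, 2), (2, 3);
""")

# A two-hop traversal needs a recursive CTE; a graph backend expresses
# the same thing as a single Cypher pattern like (a)-[*1..2]->(b).
rows = con.execute("""
WITH RECURSIVE reachable(id, depth) AS (
    SELECT dst, 1 FROM relationships WHERE src = 1
    UNION
    SELECT r.dst, reachable.depth + 1
    FROM relationships r JOIN reachable ON r.src = reachable.id
    WHERE reachable.depth < 2
)
SELECT m.content FROM reachable JOIN memories m ON m.id = reachable.id
ORDER BY m.id
""").fetchall()
print([r[0] for r in rows])  # ['child', 'grandchild']
```

This works fine at small scale, which is why SQLite remains the right default; the cost only shows up on deep traversals over large edge tables.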
When to Use:
- Want native Cypher query support
- Need better graph traversal than SQLite
- Prefer embedded (no server setup)
- <10k memories
- Zero-config like SQLite
Configuration:
```bash
export MEMORY_BACKEND=falkordblite
export MEMORY_FALKORDBLITE_PATH=~/.memorygraph/falkordblite.db  # Optional, this is the default
```

Installation:

```bash
pip install "memorygraphMCP[falkordblite]"

# macOS users may need libomp:
brew install libomp
```

Pros:
- Zero configuration (embedded)
- Native Cypher support
- Better graph performance than SQLite
- No server management
- Portable file-based storage
Cons:
- Requires additional dependency (falkordblite)
- macOS may need libomp installation
- Not suitable for >10k memories
When to Use:
- Production deployments
- Need high-performance graph operations
- >10k memories
- 500x faster p99 than Neo4j
- Team collaboration
Setup:
```bash
# Option 1: Docker
docker run -d \
  --name falkordb \
  -p 6379:6379 \
  falkordb/falkordb:latest

# Option 2: Docker with password
docker run -d \
  --name falkordb \
  -p 6379:6379 \
  -e FALKORDB_PASSWORD=your-password \
  falkordb/falkordb:latest

# Configure
export MEMORY_BACKEND=falkordb
export MEMORY_FALKORDB_HOST=localhost
export MEMORY_FALKORDB_PORT=6379
export MEMORY_FALKORDB_PASSWORD=your-password  # If set above
```

Installation:

```bash
pip install "memorygraphMCP[falkordb]"
```

Pros:
- Exceptional performance (500x faster p99 than Neo4j)
- Redis-based (familiar to many teams)
- Native Cypher query language
- Production-ready
- Excellent for high-throughput workloads
Cons:
- Requires FalkorDB server (user manages deployment)
- Setup more complex than embedded options
- Needs network configuration
Documentation: FalkorDB Docs
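Because FalkorDB runs on the Redis protocol, any Redis client can drive it, with graph queries going through the GRAPH.QUERY command. As a protocol-level illustration only (not how MemoryGraph itself connects), this is the RESP encoding a client sends over the socket; the graph name `memory` is just an example:

```python
def encode_resp(*parts: str) -> bytes:
    """Encode a Redis command as a RESP array of bulk strings."""
    out = [f"*{len(parts)}\r\n".encode()]
    for p in parts:
        data = p.encode()
        out.append(f"${len(data)}\r\n".encode() + data + b"\r\n")
    return b"".join(out)

# A Cypher query against a graph named "memory" (example name):
payload = encode_resp("GRAPH.QUERY", "memory", "MATCH (n) RETURN count(n)")
print(payload[:4])  # b'*3\r\n' -- a three-element array follows
```

In practice you would use a real Redis client library and simply send `GRAPH.QUERY <graph> <cypher>`; the point is that no bespoke wire protocol is involved.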
When to Use:
- Production deployments
- Team collaboration
- >10k memories
- Complex graph analytics
- Rich query requirements
Setup:
```bash
# Docker
docker run -d \
  --name neo4j \
  -p 7474:7474 \
  -p 7687:7687 \
  -e NEO4J_AUTH=neo4j/password \
  neo4j:5-community

# Configure
export MEMORY_BACKEND=neo4j
export MEMORY_NEO4J_URI=bolt://localhost:7687
export MEMORY_NEO4J_USER=neo4j
export MEMORY_NEO4J_PASSWORD=password
```

Pros:
- Industry-standard graph DB
- Excellent query performance
- Rich tooling (Browser, Bloom)
- Cypher query language
- Strong community
Cons:
- Requires setup
- Resource intensive
- Learning curve
When to Use:
- High-performance analytics
- Real-time queries
- Large-scale graphs
- In-memory processing
Setup:
```bash
# Docker
docker run -d \
  --name memgraph \
  -p 7687:7687 \
  -p 3000:3000 \
  memgraph/memgraph-platform

# Configure
export MEMORY_BACKEND=memgraph
export MEMORY_MEMGRAPH_URI=bolt://localhost:7687
```

Pros:
- Fastest graph analytics
- In-memory processing
- Cypher compatible
- Built for scale
Cons:
- Requires setup
- Higher memory usage
- Smaller ecosystem
Tools: 9 core tools Backend: SQLite Setup Time: 30 seconds Best For: Getting started, daily use (95% of users)
```bash
memorygraph
# or explicitly
memorygraph --profile core
```

Available Tools:
- store_memory, recall_memories, search_memories
- get_memory, update_memory, delete_memory
- create_relationship, get_related_memories
- get_session_briefing
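Over MCP, each of these tools is invoked through a standard `tools/call` JSON-RPC request. The request shape below follows the MCP protocol; the argument names inside `arguments` are illustrative, not MemoryGraph's confirmed schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "store_memory",
    "arguments": {
      "content": "Team decided to use Neo4j for production",
      "tags": ["decision", "infrastructure"]
    }
  }
}
```

Claude Code generates these requests for you; the shape only matters if you are driving the server from your own MCP client.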
Tools: 11 tools (core + advanced) Backend: SQLite/Neo4j/Memgraph Setup Time: 30 seconds (SQLite) or 5 minutes (Neo4j/Memgraph) Best For: Power users, advanced analytics
```bash
memorygraph --profile extended
# or with Neo4j
memorygraph --profile extended --backend neo4j
```

Additional Tools (beyond core):
- get_memory_statistics - Database statistics
- analyze_relationship_patterns - Advanced relationship analysis
See TOOL_PROFILES.md for complete list.
```bash
# Backend selection (required)
export MEMORY_BACKEND=sqlite  # sqlite | falkordblite | falkordb | neo4j | memgraph

# Tool profile (optional, default: core)
export MEMORY_TOOL_PROFILE=core  # core | extended

# Logging (optional, default: INFO)
export MEMORY_LOG_LEVEL=INFO  # DEBUG | INFO | WARNING | ERROR
```

```bash
# Database file path (optional)
export MEMORY_SQLITE_PATH=~/.memorygraph/memory.db

# WAL mode (optional, default: true)
export MEMORY_SQLITE_WAL_MODE=true

# Cache size (optional, default: -64000 = 64MB)
export MEMORY_SQLITE_CACHE_SIZE=-64000
```

```bash
# Database file path (optional, default: ~/.memorygraph/falkordblite.db)
export MEMORY_FALKORDBLITE_PATH=~/.memorygraph/falkordblite.db
# Or use the short form:
export FALKORDBLITE_PATH=~/.memorygraph/falkordblite.db
```

```bash
# Connection host (required if backend=falkordb)
export MEMORY_FALKORDB_HOST=localhost
export FALKORDB_HOST=localhost  # Alternative

# Connection port (optional, default: 6379)
export MEMORY_FALKORDB_PORT=6379
export FALKORDB_PORT=6379  # Alternative

# Authentication (optional)
export MEMORY_FALKORDB_PASSWORD=your-password
export FALKORDB_PASSWORD=your-password  # Alternative
```

```bash
# Connection URI (required if backend=neo4j)
export MEMORY_NEO4J_URI=bolt://localhost:7687

# Authentication (required)
export MEMORY_NEO4J_USER=neo4j
export MEMORY_NEO4J_PASSWORD=your-password

# Connection pool (optional)
export MEMORY_NEO4J_MAX_POOL_SIZE=50
export MEMORY_NEO4J_CONNECTION_TIMEOUT=30

# Database name (optional, default: neo4j)
export MEMORY_NEO4J_DATABASE=neo4j
```

```bash
# Connection URI (required if backend=memgraph)
export MEMORY_MEMGRAPH_URI=bolt://localhost:7687

# Authentication (optional, default: no auth)
export MEMORY_MEMGRAPH_USER=memgraph
export MEMORY_MEMGRAPH_PASSWORD=your-password

# Connection pool (optional)
export MEMORY_MEMGRAPH_MAX_POOL_SIZE=50
```

```bash
# Embedding model (optional, default: all-MiniLM-L6-v2)
export MEMORY_EMBEDDING_MODEL=all-MiniLM-L6-v2

# SpaCy model (optional, default: en_core_web_sm)
export MEMORY_SPACY_MODEL=en_core_web_sm

# Context token limit (optional, default: 4000)
export MEMORY_CONTEXT_TOKEN_LIMIT=4000
```

```yaml
version: '3.8'

services:
  memorygraph:
    build: .
    stdin_open: true
    tty: true
    environment:
      - MEMORY_BACKEND=sqlite
      - MEMORY_SQLITE_PATH=/data/memory.db
    volumes:
      - memory_data:/data

volumes:
  memory_data:
```

Usage:

```bash
docker compose up -d
docker compose logs -f
```

```yaml
version: '3.8'

services:
  neo4j:
    image: neo4j:5-community
    ports:
      - "7474:7474"
      - "7687:7687"
    environment:
      - NEO4J_AUTH=neo4j/password
      - NEO4J_dbms_memory_heap_max__size=2g
      - NEO4J_dbms_memory_pagecache_size=1g
    volumes:
      - neo4j_data:/data

  memorygraph:
    build: .
    depends_on:
      - neo4j
    stdin_open: true
    tty: true
    environment:
      - MEMORY_BACKEND=neo4j
      - MEMORY_TOOL_PROFILE=extended
      - MEMORY_NEO4J_URI=bolt://neo4j:7687
      - MEMORY_NEO4J_USER=neo4j
      - MEMORY_NEO4J_PASSWORD=password

volumes:
  neo4j_data:
```

Usage:

```bash
docker compose -f docker-compose.neo4j.yml up -d

# Access Neo4j Browser
open http://localhost:7474
```

```yaml
version: '3.8'

services:
  memgraph:
    image: memgraph/memgraph-platform
    ports:
      - "7687:7687"
      - "3000:3000"
    volumes:
      - memgraph_data:/var/lib/memgraph

  memorygraph:
    build: .
    depends_on:
      - memgraph
    stdin_open: true
    tty: true
    environment:
      - MEMORY_BACKEND=memgraph
      - MEMORY_TOOL_PROFILE=extended
      - MEMORY_MEMGRAPH_URI=bolt://memgraph:7687

volumes:
  memgraph_data:
```

Usage:

```bash
docker compose -f docker-compose.full.yml up -d

# Access Memgraph Lab
open http://localhost:3000
```

No migration needed:
```bash
# Before
memorygraph --profile core

# After
memorygraph --profile extended
```

Extended mode adds two additional tools (get_memory_statistics, analyze_relationship_patterns) but uses the same database.
1. Export SQLite data (when implemented):

```bash
memorygraph --backend sqlite --export backup.json
```

2. Set up Neo4j:

```bash
docker run -d \
  --name neo4j \
  -p 7474:7474 -p 7687:7687 \
  -e NEO4J_AUTH=neo4j/password \
  neo4j:5-community
```

3. Import to Neo4j (when implemented):

```bash
memorygraph --backend neo4j --import backup.json
```

4. Update MCP config:

```json
{
  "command": "memorygraph",
  "args": ["--backend", "neo4j", "--profile", "extended"],
  "env": {
    "MEMORY_NEO4J_URI": "bolt://localhost:7687",
    "MEMORY_NEO4J_USER": "neo4j",
    "MEMORY_NEO4J_PASSWORD": "password"
  }
}
```

If export/import is not available:
1. Get all memories from SQLite:
Use search_memories with no filters to get all IDs, then get_memory for each.
2. Store in Neo4j:
Use store_memory for each memory in the new backend.
3. Recreate relationships:
Get all relationships from SQLite and recreate with create_relationship.
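The three steps above amount to a copy loop: enumerate, re-store, re-link. A sketch in Python, where `source` and `target` are hypothetical stand-ins for whatever client you use to drive each backend's MCP tools (modeled here as plain dicts so the shape of the loop is clear):

```python
# Hypothetical stand-ins for the old and new backends.
source = {
    "memories": [{"id": 1, "content": "root"}, {"id": 2, "content": "child"}],
    "relationships": [{"from": 1, "to": 2, "type": "RELATES_TO"}],
}
target = {"memories": [], "relationships": []}

# Steps 1 + 2: enumerate every memory, store it in the new backend,
# and keep a mapping from old IDs to new IDs.
id_map = {}
for mem in source["memories"]:          # i.e. search_memories / get_memory
    copy = {"id": len(target["memories"]) + 1, "content": mem["content"]}
    target["memories"].append(copy)     # i.e. store_memory
    id_map[mem["id"]] = copy["id"]

# Step 3: recreate relationships using the remapped IDs.
for rel in source["relationships"]:
    target["relationships"].append({
        "from": id_map[rel["from"]],
        "to": id_map[rel["to"]],
        "type": rel["type"],            # i.e. create_relationship
    })

print(len(target["memories"]), len(target["relationships"]))
```

The ID remapping is the step most easily forgotten: the new backend assigns its own IDs, so relationships must be rewritten through the map rather than copied verbatim.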
Check configuration:
```bash
memorygraph --show-config
```

Check backend connection:

```bash
# SQLite
ls -lah ~/.memorygraph/

# Neo4j
docker ps | grep neo4j
docker logs neo4j

# Memgraph
docker ps | grep memgraph
docker logs memgraph
```

Check logs:

```bash
memorygraph --log-level DEBUG
```

SQLite locked:
```bash
# Check for running processes
ps aux | grep memorygraph

# Remove lock file (if safe)
rm ~/.memorygraph/memory.db-lock

# Check permissions
ls -la ~/.memorygraph/
```

Neo4j connection refused:

```bash
# Verify Neo4j is running
docker ps | grep neo4j

# Check ports
netstat -an | grep 7687

# Verify credentials
memorygraph --backend neo4j --show-config

# Test connection manually
cypher-shell -a bolt://localhost:7687 -u neo4j -p password
```

Memgraph connection refused:

```bash
# Verify Memgraph is running
docker ps | grep memgraph

# Restart if needed
docker restart memgraph

# Check logs
docker logs memgraph
```

SQLite slow queries:

```bash
# Check database size
ls -lh ~/.memorygraph/memory.db

# Vacuum database
sqlite3 ~/.memorygraph/memory.db "VACUUM;"

# Consider upgrading to Neo4j
memorygraph --backend neo4j --profile extended
```

Neo4j out of memory:

```bash
# Increase heap size
docker run -e NEO4J_dbms_memory_heap_max__size=4g ...

# Check current memory usage
# In Neo4j Browser: :sysinfo
```

Memgraph high memory usage:

```bash
# Check memory limit
docker inspect memgraph | grep Memory

# Increase limit
docker run -e MEMGRAPH_MEM_LIMIT=8192 ...
```

Check profile:
```bash
memorygraph --show-config
# Verify tool_profile is correct
```

Upgrade profile:

```bash
# From core to extended
memorygraph --profile extended
```

Check tool exists:

```bash
# List all tools (in server logs)
memorygraph --log-level DEBUG
# Look for "Registered X/11 tools" (9 for core, 11 for extended)
```

Enable debug logging:

```bash
export MEMORY_LOG_LEVEL=DEBUG
memorygraph
```

Or via CLI:

```bash
memorygraph --log-level DEBUG
```

Verify configuration:

```bash
memorygraph --show-config
```

Test backend connection (when implemented):

```bash
memorygraph --health
```

Check version:

```bash
memorygraph --version
```

- Choose appropriate backend (SQLite/Neo4j/Memgraph)
- Select tool profile (core/extended)
- Configure environment variables
- Set up database (if Neo4j/Memgraph)
- Test connection locally
- Configure MCP integration
- Test with Claude Code
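The "configure environment variables" and "test connection" items can be smoke-tested with a small preflight script. A sketch that checks the variables documented above are present for the chosen backend; the required-variable sets below mirror this guide, so adjust them if your deployment differs:

```python
import os

# Required variables per backend, per the Environment Variables section.
REQUIRED = {
    "sqlite": [],
    "falkordblite": [],
    "falkordb": ["MEMORY_FALKORDB_HOST"],
    "neo4j": ["MEMORY_NEO4J_URI", "MEMORY_NEO4J_USER", "MEMORY_NEO4J_PASSWORD"],
    "memgraph": ["MEMORY_MEMGRAPH_URI"],
}

def preflight(env=os.environ):
    """Return a list of problems; empty means the env looks complete."""
    backend = env.get("MEMORY_BACKEND", "sqlite")
    if backend not in REQUIRED:
        return [f"unknown MEMORY_BACKEND: {backend}"]
    return [f"missing {var}" for var in REQUIRED[backend] if not env.get(var)]

problems = preflight()
print("OK" if not problems else "\n".join(problems))
```

Run it in the same shell (or CI job) that will launch the server, before `memorygraph` itself starts.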
- Use strong passwords (Neo4j/Memgraph)
- Enable TLS/SSL for remote connections
- Restrict network access (firewall)
- Regular security updates
- Backup encryption (if sensitive data)
- Audit access logs
- Tune database memory settings
- Create appropriate indexes
- Monitor query performance
- Set up performance alerts
- Plan for scaling
- Log aggregation (ELK, Splunk)
- Metrics collection (Prometheus)
- Alerting (PagerDuty, Slack)
- Database monitoring
- Resource usage tracking
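For the database-monitoring and resource-tracking items, even the SQLite backend can export a couple of gauges in the Prometheus textfile format. A minimal sketch; the metric names are made up for illustration, and the `memories` table name is an assumption about the schema:

```python
import sqlite3
from pathlib import Path

def export_metrics(db_path, out_path):
    """Write database size and memory count as Prometheus-style gauges."""
    db = Path(db_path).expanduser()
    lines = [f"memorygraph_db_size_bytes {db.stat().st_size}"]
    try:
        # Table name is assumed; check the real schema first.
        n = sqlite3.connect(db).execute(
            "SELECT count(*) FROM memories").fetchone()[0]
        lines.append(f"memorygraph_memory_count {n}")
    except sqlite3.OperationalError:
        pass  # table name differs; the size gauge is still useful
    Path(out_path).write_text("\n".join(lines) + "\n")
```

Pointing the Prometheus node_exporter textfile collector at the output file (run via cron) covers basic trend monitoring without touching the server itself.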
- Automated daily backups
- Test restore procedures
- Off-site backup storage
- Retention policy (7/30/90 days)
- Disaster recovery plan
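For the SQLite backend, the first items on this list can be covered with the stdlib backup API, which is safe against a live database (unlike copying the file). A minimal sketch, assuming the default database path and a 30-day retention, both adjustable:

```python
import sqlite3
import time
from pathlib import Path

def backup_sqlite(db="~/.memorygraph/memory.db",
                  backup_dir="~/.memorygraph/backups", keep_days=30):
    """Take a consistent snapshot and prune backups past the retention window."""
    db, backup_dir = Path(db).expanduser(), Path(backup_dir).expanduser()
    backup_dir.mkdir(parents=True, exist_ok=True)
    dest = backup_dir / time.strftime("memory-%Y%m%d-%H%M%S.db")
    # Connection.backup copies page-by-page and tolerates concurrent writers.
    with sqlite3.connect(db) as src, sqlite3.connect(dest) as dst:
        src.backup(dst)
    cutoff = time.time() - keep_days * 86400
    for old in backup_dir.glob("memory-*.db"):
        if old.stat().st_mtime < cutoff:
            old.unlink()
    return dest
```

Scheduled daily via cron (and with the backup directory synced off-site), this handles the automated-backup and retention items; restoring is just pointing MEMORY_SQLITE_PATH at a snapshot, which is also the easiest way to test restores.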
- Document configuration
- Document backup procedures
- Document troubleshooting steps
- Document team access
- Document upgrade path
Recommendation: SQLite, core profile
Step 1: Install
```bash
pip install memorygraphMCP
```

Step 2: Configure MCP (Claude Code CLI):

```bash
claude mcp add --transport stdio memorygraph memorygraph
```

Step 2: Configure MCP (Manual):

```json
{
  "mcpServers": {
    "memorygraph": {
      "command": "memorygraph"
    }
  }
}
```

Recommendation: Neo4j, extended profile
Step 1: Server setup
```bash
docker run -d --name neo4j \
  -p 7474:7474 -p 7687:7687 \
  -e NEO4J_AUTH=neo4j/strong-password \
  neo4j:5-community
```

Step 2: Each team member installs

```bash
pip install "memorygraphMCP[neo4j]"
```

Step 3: Configure MCP (Claude Code CLI - team members):

```bash
claude mcp add --transport stdio memorygraph memorygraph --profile extended --backend neo4j \
  --env MEMORY_NEO4J_URI=bolt://team-server:7687 \
  --env MEMORY_NEO4J_USER=neo4j \
  --env MEMORY_NEO4J_PASSWORD=strong-password
```

Step 3: Configure MCP (Manual - team members):
```json
{
  "mcpServers": {
    "memorygraph": {
      "command": "memorygraph",
      "args": ["--backend", "neo4j", "--profile", "extended"],
      "env": {
        "MEMORY_NEO4J_URI": "bolt://team-server:7687",
        "MEMORY_NEO4J_USER": "neo4j",
        "MEMORY_NEO4J_PASSWORD": "strong-password"
      }
    }
  }
}
```

Recommendation: Neo4j Aura, extended profile
Step 1: Create Neo4j Aura instance:
- Go to https://neo4j.com/cloud/aura/
- Create free instance
- Save connection details
Step 2: Install locally:
```bash
pip install "memorygraphMCP[neo4j]"
```

Step 3: Configure MCP (Claude Code CLI):

```bash
claude mcp add --transport stdio memorygraph memorygraph --profile extended --backend neo4j \
  --env MEMORY_NEO4J_URI=neo4j+s://your-instance.neo4j.io \
  --env MEMORY_NEO4J_USER=neo4j \
  --env MEMORY_NEO4J_PASSWORD=your-password
```

Step 3: Configure MCP (Manual):
```json
{
  "mcpServers": {
    "memorygraph": {
      "command": "memorygraph",
      "args": ["--backend", "neo4j", "--profile", "extended"],
      "env": {
        "MEMORY_NEO4J_URI": "neo4j+s://your-instance.neo4j.io",
        "MEMORY_NEO4J_USER": "neo4j",
        "MEMORY_NEO4J_PASSWORD": "your-password"
      }
    }
  }
}
```

| Memories | Query Time | Storage | RAM |
|---|---|---|---|
| 1,000 | <50ms | 5MB | 50MB |
| 10,000 | <100ms | 50MB | 100MB |
| 100,000 | <500ms | 500MB | 200MB |

| Memories | Query Time | Storage | RAM |
|---|---|---|---|
| 1,000 | <10ms | 10MB | 500MB |
| 10,000 | <50ms | 100MB | 1GB |
| 100,000 | <100ms | 1GB | 2GB |

| Memories | Query Time | Storage | RAM |
|---|---|---|---|
| 1,000 | <5ms | 10MB | 200MB |
| 10,000 | <20ms | 100MB | 2GB |
| 100,000 | <50ms | 1GB | 4GB |
- Choose your deployment method
- Set up your backend
- Configure environment variables
- Test the connection
- Integrate with Claude Code
- Monitor and optimize
For more help:
- TOOL_PROFILES.md - Tool reference
- CLAUDE_CODE_SETUP.md - Claude Code integration
- archive/FULL_MODE_LEGACY.md - Legacy documentation (archived)
- GitHub Issues - Support
Last Updated: November 28, 2025