3D visualization of embedding spaces from claude-memory - explore your Claude Code conversations in semantic space.
This visualizer works with data from claude-memory. You'll need:
- Bun - Fast JavaScript runtime

  ```bash
  curl -fsSL https://bun.sh/install | bash -
  ```

- claude-memory installed and synced - follow the claude-memory setup guide

- Python dependencies for the export script (or use claude-memory's venv):

  ```bash
  pip install chromadb umap-learn scikit-learn numpy

  # Or if you have claude-memory installed:
  # ~/dev/claude-memory/.venv/bin/python scripts/export-chromadb.py
  ```

- Ollama (optional) - for AI summaries when box-selecting points:

  ```bash
  ollama pull qwen2.5:1.5b  # or any text model
  ```
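For orientation, the box-selection summaries boil down to a single request against Ollama's local HTTP API. This is a minimal sketch, not the visualizer's actual code; the model name and prompt wording are placeholders:

```typescript
// Sketch: summarizing box-selected documents with a local Ollama model.
// The endpoint and request shape follow Ollama's standard local API;
// the model name and prompt here are illustrative only.
async function summarizeSelection(texts: string[]): Promise<string> {
  const response = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "qwen2.5:1.5b",
      prompt: `Summarize the common theme of these excerpts:\n\n${texts.join("\n---\n")}`,
      stream: false,
    }),
  });
  const data = await response.json();
  return data.response; // Ollama returns the generated text in `response`
}
```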
```bash
# Clone the repo
git clone https://github.com/jopnelli/claude-memory-visualizer
cd claude-memory-visualizer

# Install dependencies
bun install

# Export your claude-memory data (skip if just trying the demo)
python scripts/export-chromadb.py

# Start dev server
bun run dev
```

The visualizer auto-loads your Claude Memory data if available; otherwise it shows a demo dataset.
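That load-with-fallback behavior amounts to trying the exported file first and falling back to the bundled demo. A minimal sketch, assuming illustrative file paths (the real app may serve the data under different names):

```typescript
// Sketch of loading exported data with a demo fallback.
// "/data/claude-memory.json" and "/data/demo.json" are assumed paths
// for illustration only.
async function loadDataset(): Promise<unknown> {
  try {
    const res = await fetch("/data/claude-memory.json");
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    return await res.json();
  } catch {
    // No exported data found: fall back to the bundled demo dataset
    const demo = await fetch("/data/demo.json");
    return demo.json();
  }
}
```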
- 3D Point Cloud - Visualize thousands of conversation embeddings in 3D space
- Pre-computed Projections - UMAP, t-SNE, PCA computed server-side for instant switching
- Semantic Search - Find conversations by meaning using `all-mpnet-base-v2` embeddings
- Box Selection - Cmd+drag to select regions, get AI summaries via Ollama
- Time-based Coloring - Older conversations in blue, newer in red (see the sketch after this list)
- Inspector Panel - Hover/click to see full document text and metadata
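To make the time-based coloring concrete, here is a rough sketch of building a colored point cloud with Three.js. It is illustrative only; the field names, color ramp, and material settings are assumptions, not the project's actual implementation:

```typescript
import * as THREE from "three";

// Sketch: map each document's timestamp to a blue-to-red color and build
// the point-cloud geometry. Timestamps are assumed to be numeric epochs.
function buildPointCloud(positions: number[][], timestamps: number[]): THREE.Points {
  const geometry = new THREE.BufferGeometry();
  geometry.setAttribute(
    "position",
    new THREE.Float32BufferAttribute(positions.flat(), 3),
  );

  const min = Math.min(...timestamps);
  const max = Math.max(...timestamps);
  const colors = new Float32Array(timestamps.length * 3);
  timestamps.forEach((t, i) => {
    const recency = (t - min) / (max - min || 1); // 0 = oldest, 1 = newest
    colors[i * 3 + 0] = recency;     // red grows with recency
    colors[i * 3 + 1] = 0.2;         // a touch of green for readability
    colors[i * 3 + 2] = 1 - recency; // blue fades with recency
  });
  geometry.setAttribute("color", new THREE.Float32BufferAttribute(colors, 3));

  const material = new THREE.PointsMaterial({ size: 0.05, vertexColors: true });
  return new THREE.Points(geometry, material);
}
```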
The Python export script (scripts/export-chromadb.py) reads from ChromaDB and pre-computes all three projections:
```bash
# Basic usage (reads from ~/.claude-memory/chroma)
python scripts/export-chromadb.py

# Custom options
python scripts/export-chromadb.py \
  --chroma-path /path/to/chroma \
  --output public/data/my-data.json \
  --limit 1000
```

Output format:
```json
{
"metadata": {
"name": "Claude Memory",
"embedding_model": "sentence-transformers/all-mpnet-base-v2",
"embedding_dim": 768,
"count": 3486
},
"documents": [...],
"projections": {
"umap": [[x, y, z], ...],
"tsne": [[x, y, z], ...],
"pca": [[x, y, z], ...]
}
}
```

The visualizer can load one of two datasets:

- Claude Memory - Your exported claude-memory data with full semantic search
- Demo Dataset - A small sample dataset to explore the interface (text search only)
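In TypeScript terms, the exported file above could be described roughly as follows. This is a sketch based on the JSON sample; the project's actual type definitions may differ, and the shape of `documents` entries is assumed:

```typescript
// Rough typing of the export format shown above. Only the top-level shape
// is taken from the JSON sample; DocumentEntry fields are assumptions.
interface ExportedData {
  metadata: {
    name: string;
    embedding_model: string;
    embedding_dim: number;
    count: number;
  };
  documents: DocumentEntry[];
  projections: {
    umap: [number, number, number][];
    tsne: [number, number, number][];
    pca: [number, number, number][];
  };
}

interface DocumentEntry {
  id: string;                          // assumed
  text: string;                        // assumed: text shown in the inspector
  metadata?: Record<string, unknown>;  // assumed
}
```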
- Three.js - 3D rendering with point clouds and OrbitControls
- DRUIDJS - Browser-side dimensionality reduction (fallback if no pre-computed projections)
- Transformers.js - In-browser `all-mpnet-base-v2` embeddings for semantic search (~420MB, cached); see the sketch after this list
- Ollama - Local AI summaries for box selection (optional)
- Bun + Vite - Dev server and bundling
- TypeScript - Type-safe codebase
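As a rough illustration of the in-browser search path, Transformers.js can load the same model and embed the query locally, and cosine similarity against the stored document embeddings then ranks results. The sketch assumes the exported embeddings are unit-normalized (so a dot product equals cosine similarity), which may not match the actual code:

```typescript
import { pipeline } from "@xenova/transformers";

// Load the feature-extraction pipeline once (~420MB download, then cached).
const embed = await pipeline("feature-extraction", "Xenova/all-mpnet-base-v2");

// Embed a search query and rank documents by similarity.
// Assumes document embeddings are unit-normalized, so dot product == cosine.
async function search(query: string, docEmbeddings: number[][], topK = 10) {
  const output = await embed(query, { pooling: "mean", normalize: true });
  const q = Array.from(output.data as Float32Array);

  const scored = docEmbeddings.map((vec, i) => ({
    index: i,
    score: vec.reduce((sum, v, j) => sum + v * q[j], 0),
  }));
  return scored.sort((a, b) => b.score - a.score).slice(0, topK);
}
```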
| Algorithm | Speed | Quality | Best For |
|---|---|---|---|
| UMAP | Fast | High | Default choice, good balance |
| t-SNE | Slow | High | Tight, well-separated clusters |
| PCA | Instant | Medium | Quick overview, may have overlap |
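Because all three projections ship pre-computed in the exported file, switching between them only requires rewriting the position buffer rather than recomputing anything in the browser. A sketch, with illustrative names:

```typescript
import * as THREE from "three";

// Swap the point cloud to a different pre-computed projection
// ("umap", "tsne", or "pca") taken from the exported data.
function switchProjection(
  points: THREE.Points,
  projections: Record<string, number[][]>,
  algorithm: "umap" | "tsne" | "pca",
): void {
  const coords = projections[algorithm];
  const attr = points.geometry.getAttribute("position") as THREE.BufferAttribute;
  coords.forEach(([x, y, z], i) => attr.setXYZ(i, x, y, z));
  attr.needsUpdate = true; // tell Three.js to re-upload the buffer to the GPU
}
```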
- Turn-level chunking: claude-memory chunks at individual turns (1 user + 1 assistant message), not full conversations. Search may miss context that spans multiple turns.
- First search delay: Downloads ~420MB embedding model on first search (cached after)
- Best for <10k docs: Larger datasets may be slow to render
```bash
# Start dev server
bun run dev

# Re-export data after claude-memory changes
python scripts/export-chromadb.py
```

By default, semantic search downloads a ~420MB embedding model to the browser on first use. For faster search, run the local embedding server:
```bash
pip install sentence-transformers
python scripts/embed-server.py
```

Tip: If you have claude-memory installed, its venv already includes sentence-transformers.
The visualizer auto-detects the local server and uses it for faster search.
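The auto-detection amounts to probing the local server and falling back to the in-browser model when it isn't reachable. A sketch under assumed names; the port and endpoint below are placeholders, not necessarily what `scripts/embed-server.py` exposes:

```typescript
import { pipeline } from "@xenova/transformers";

// Probe a local embedding server; fall back to the in-browser model if it is
// not reachable. The URL, endpoint, and response shape are assumptions.
const EMBED_SERVER = "http://localhost:8765";

async function embedQuery(query: string): Promise<number[]> {
  try {
    const res = await fetch(`${EMBED_SERVER}/embed`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ text: query }),
    });
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    return (await res.json()).embedding as number[];
  } catch {
    // Server not running: fall back to the slower in-browser model.
    const embed = await pipeline("feature-extraction", "Xenova/all-mpnet-base-v2");
    const output = await embed(query, { pooling: "mean", normalize: true });
    return Array.from(output.data as Float32Array);
  }
}
```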
MIT
