A personal AI assistant with a brain-inspired memory system that forgets, consolidates, reconsolidates, detects contradictions, and abstracts behavioral patterns — capabilities no existing AI memory system (MemGPT, Mem0, Zep, or A-Mem) has shipped in production.
Every AI assistant today has the same memory flaw: it stores everything and retrieves by similarity. This is a filing cabinet, not a brain. As conversations accumulate, the system drowns in redundant facts, conflicting information, and outdated priorities — with no mechanism to resolve any of it.
The human brain solves this with five mechanisms that run simultaneously:
- Two-stage consolidation — fast capture in the hippocampus, slow distillation to the neocortex during sleep
- Selective forgetting — memories decay unless reinforced through spaced retrieval
- Reconsolidation — retrieved memories become temporarily unstable and can be refined by new context
- Cognitive dissonance — contradicting beliefs trigger error signals proportional to the conflict
- Hierarchical compression — raw experience is compressed 2,000,000:1 into schemas and generative models
We implemented all five.
```
Raw conversation --> Episodic Memory --> Semantic Memory  --> Schema
                     (fast capture)     (distilled facts)     (behavioral patterns)
```
| Layer | Example | Created |
|---|---|---|
| Episodic | "On March 15, user said delay the launch because engineering is behind" | After each conversation turn |
| Semantic | "User delayed product launch due to engineering delays (March 2026)" | During consolidation (every 6h) |
| Schema | "User prioritizes engineering readiness over market timing in launch decisions" | When 3+ semantic memories cluster |
Every memory has a strength score that decays and strengthens over time, governed by three update rules:

1. Idempotent time decay — target strength is an Ebbinghaus-style exponential function purely of elapsed time since last reinforcement, so repeated consolidation passes converge to the same value instead of compounding error.
2. Spacing-aware retrieval reinforcement — the boost from a recall scales with the time since last access and diminishes as strength approaches the ceiling: rapid-fire recalls add almost nothing, while well-spaced recalls strengthen durably.
| Scenario | Flat boost (old) | Spacing-aware (new) |
|---|---|---|
| Recalled 30 sec ago | Full boost | Near-zero boost |
| Recalled 1 day ago | Full boost | Partial boost |
| Recalled 7 days ago, strength near ceiling | Full boost | Small boost |
| Recalled 7 days ago, low strength | Full boost | Large boost |
3. Evidence-weighted contradiction (Rescorla & Wagner, 1972) — the penalty applied to an old belief is modulated by the relative strength of the new evidence versus the old belief: strong new evidence suppresses a weak belief sharply, while weak evidence barely dents a strong one.

Memories whose strength falls below a minimum threshold are forgotten.
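A minimal sketch of how the three update rules might compose. The function names, parameter values, and exact functional forms here are illustrative assumptions, not the project's actual implementation:

```python
import math

def decay(s0: float, hours_elapsed: float, stability_hours: float = 168.0) -> float:
    """Idempotent Ebbinghaus-style decay: target strength is a pure function
    of elapsed time, so re-running the pass never compounds error."""
    return s0 * math.exp(-hours_elapsed / stability_hours)

def retrieval_boost(s: float, hours_since_access: float,
                    base: float = 0.2, spacing_hours: float = 24.0) -> float:
    """Spacing-aware reinforcement: the boost grows with time since last
    access and shrinks as strength approaches the 1.0 ceiling."""
    spacing = 1.0 - math.exp(-hours_since_access / spacing_hours)
    return min(1.0, s + base * spacing * (1.0 - s))

def contradiction_penalty(s_old: float, s_new: float, alpha: float = 0.5) -> float:
    """Rescorla-Wagner-style update: the penalty on the old belief scales
    with the relative strength of the new evidence."""
    return s_old * (1.0 - alpha * s_new / (s_new + s_old))
```

The key property is in `retrieval_boost`: a recall 30 seconds after the last one contributes almost nothing, while a recall after a week contributes close to the full boost — the spacing effect.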
A scheduled background daemon runs every 6 hours and performs the following passes:
- Clustering: DBSCAN on the cosine distance matrix $D_{ij} = 1 - \cos(\mathbf{e}_i, \mathbf{e}_j)$ with $\varepsilon = 0.35$, `min_samples` $= 3$
- Distillation: each cluster $C_k$ with $|C_k| \geq 3$ is distilled by Claude Haiku into one semantic memory
- Centrality-weighted decay: source episodics fade based on distance from the cluster centroid — $\gamma_i = 0.5 + 0.4 \cdot (1 - \text{sim}(\mathbf{e}_i, \bar{\mathbf{e}}_{C_k}))$ — central memories fade more, peripheral ones retain unique details
- Schema synthesis: re-cluster semantics ($\varepsilon = 0.45$), synthesize behavioral patterns from clusters of $\geq 3$
- Idempotent decay pass: Ebbinghaus curve applied to all memories not accessed in 7+ days
- Priority snapshots: compares current priorities with a 30-day-old snapshot, classifies the change as `deliberate_pivot` | `gradual_drift` | `stable`
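The clustering and centrality-decay steps can be sketched with scikit-learn. The $\varepsilon$, `min_samples`, and $\gamma$ values come from the pipeline description; the function names and the normalization details are illustrative:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_episodics(embeddings: np.ndarray) -> np.ndarray:
    """Cluster episodic memory embeddings on cosine distance."""
    # Normalize rows so cosine distance reduces to 1 - dot product
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    dist = 1.0 - normed @ normed.T
    np.clip(dist, 0.0, 2.0, out=dist)  # guard against tiny negative values
    labels = DBSCAN(eps=0.35, min_samples=3, metric="precomputed").fit(dist).labels_
    return labels  # -1 marks noise; each cluster feeds one distilled semantic memory

def centrality_decay_factors(embeddings: np.ndarray, labels: np.ndarray, k: int) -> np.ndarray:
    """Centrality-weighted decay gamma_i for members of cluster k:
    central memories fade more, peripheral ones keep unique detail."""
    members = embeddings[labels == k]
    centroid = members.mean(axis=0)
    sims = (members @ centroid) / (np.linalg.norm(members, axis=1) * np.linalg.norm(centroid))
    return 0.5 + 0.4 * (1.0 - sims)
```

Precomputing the distance matrix keeps the cosine geometry explicit; for large stores one could instead pass `metric="cosine"` and let scikit-learn handle it.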
Based on Nader, Schafe & LeDoux (2000): when a memory is retrieved, it enters a 6-hour labile window during which new context can refine it before it re-stabilizes. If it is re-retrieved while already labile, the window extends from the most recent retrieval.
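The lability bookkeeping can be sketched as follows — class and method names are illustrative, but the 6-hour window and the extend-on-re-retrieval behavior match the description above:

```python
from datetime import datetime, timedelta

LABILITY_WINDOW = timedelta(hours=6)

class Memory:
    def __init__(self, text: str):
        self.text = text
        self.labile_until: datetime | None = None

    def on_retrieve(self, now: datetime) -> None:
        # Retrieval (re)opens the lability window; re-retrieval while
        # already labile extends it from the latest retrieval.
        self.labile_until = now + LABILITY_WINDOW

    def is_labile(self, now: datetime) -> bool:
        return self.labile_until is not None and now < self.labile_until

    def try_refine(self, new_text: str, now: datetime) -> bool:
        # Refinement only lands while the memory is labile;
        # a stabilized memory rejects passive updates.
        if self.is_labile(now):
            self.text = new_text
            return True
        return False
```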
This implements a dual belief-update architecture:
| Pathway | Trigger | Behavior |
|---|---|---|
| Reconsolidation | Memory retrieved | Passive refinement: "I prefer async" |
| Contradiction detection | Explicit conflict | Evidence-weighted superseded_by link |
Reconsolidation catches gradual belief drift that hard contradiction detection would miss. No other production agent memory system implements retrieval-triggered lability windows.
Two-pass detection:
- Real-time (during extraction): Every new decision or preference is checked against existing memories. Claude Haiku identifies semantic conflicts. Old memory receives evidence-weighted strength penalty.
- Offline (during consolidation): Full audit across the memory store for subtle contradictions missed in real-time.
Decision ledger — decisions are first-class objects with:
- `decision_text` + `reasoning` + `domain` + `outcome`
- Explicit supersession chains: when a decision is reversed, the old one links to the new one
- The agent can query the ledger by topic or domain to surface prior decisions with their reasoning
The same query produces different results depending on context. The composite query vector blends the embedding of the current message with recent conversation state, and candidates are scored by a four-factor product of semantic similarity, memory strength, recency, and a layer weight keyed to the inferred query intent.
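A sketch of the four-factor product — the factor list matches the description, but the layer-weight table, the half-life, and the exact recency form are illustrative assumptions:

```python
import math

# Illustrative layer weights keyed to inferred query intent:
# a "behavioral" query favors schemas, a "factual" one favors semantics.
LAYER_WEIGHTS = {
    "factual":    {"episodic": 0.8, "semantic": 1.0, "schema": 0.6},
    "behavioral": {"episodic": 0.5, "semantic": 0.8, "schema": 1.0},
}

def score(similarity: float, strength: float, age_days: float,
          layer: str, intent: str, recency_half_life: float = 30.0) -> float:
    """Four-factor product: similarity x strength x recency x layer weight."""
    recency = math.exp(-math.log(2) * age_days / recency_half_life)
    return similarity * strength * recency * LAYER_WEIGHTS[intent][layer]
```

A multiplicative form means any single weak factor (a stale memory, a near-forgotten one, a layer mismatch) suppresses the candidate, rather than being averaged away.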
| Capability | MemGPT/Letta | Mem0 | Zep | A-Mem | MemoryBank | FADEMEM | Agenternal |
|---|---|---|---|---|---|---|---|
| Memory hierarchy | 2 flat layers | Flat | 3 tiers | Flat | Flat | Flat | Episodic --> Semantic --> Schema |
| Forgetting | None | None | Staleness | Activation | Ebbinghaus | Adaptive exp. | Idempotent Ebbinghaus |
| Spacing effect | None | None | None | None | None | None | Spaced stability + spacing-scaled boost |
| Contradiction model | None | None | None | None | None | Exp. suppression | Evidence-weighted (Rescorla-Wagner) |
| Reconsolidation | None | None | None | None | None | None | 6h lability windows |
| Pattern abstraction | None | None | None | None | None | None | DBSCAN --> behavioral schemas |
| Decision tracking | None | None | None | None | None | None | Ledger with supersession chains |
| Context-sensitive retrieval | None | None | User-aware | None | None | None | Intent + recency + layer weighted |
| Offline consolidation | None | None | None | None | None | None | Scheduled daemon with centrality-weighted decay |
| Priority drift detection | None | None | None | None | None | None | Snapshot comparison + drift classification |
| Component | Technology |
|---|---|
| LLM | Claude Sonnet 4 (streaming) + Claude Haiku 4.5 (extraction, consolidation) |
| Frontend | Next.js 16, React 19, Tailwind CSS 4 |
| Backend | FastAPI, Python 3.12 |
| Database | PostgreSQL 17 + pgvector |
| Embeddings | fastembed BAAI/bge-small-en-v1.5 (384 dims, local ONNX) |
| Clustering | scikit-learn DBSCAN (cosine distance) |
| Search | DuckDuckGo (no API key) |
| Deployment | Docker Compose / Railway |
- Docker & Docker Compose
- Anthropic API key
```bash
echo "ANTHROPIC_API_KEY=your-key-here" > backend/.env
docker compose up -d --build
```

- Chat: http://localhost:3001
- Memory: http://localhost:3001/memory
- Tasks: http://localhost:3001/tasks
- API docs: http://localhost:8000/docs
```bash
curl -X POST http://localhost:8000/api/consolidate
```

In a new Railway project, create three services:
| Service | How | Root directory | Port |
|---|---|---|---|
| PostgreSQL | "New" > "Database" > "PostgreSQL" | — | auto |
| backend | "New" > "GitHub Repo" > this repo | `/backend` | 8000 |
| frontend | "New" > "GitHub Repo" > this repo | `/frontend` | 3000 |
backend:

| Variable | Value |
|---|---|
| `ANTHROPIC_API_KEY` | Your Anthropic API key |
| `DATABASE_URL` | Copy from Railway PostgreSQL service (auto-converts `postgresql://` to `postgresql+asyncpg://`) |
| `CORS_ORIGINS` | `https://<your-frontend>.up.railway.app` |
frontend:

| Variable | Value |
|---|---|
| `NEXT_PUBLIC_API_URL` | `https://<your-backend>.up.railway.app` |
Railway's PostgreSQL supports pgvector. The backend automatically runs `CREATE EXTENSION IF NOT EXISTS vector` on startup.
- The backend Dockerfile pre-downloads the ONNX embedding model at build time (~100MB) — no cold-start delay
- The consolidation scheduler starts automatically with the backend (every 6h)
- Health check: `GET /api/health`
- Currently public (no auth) — add authentication before sharing widely
```
agenternal/
├── docker-compose.yml
├── docs/
│   └── brain-inspired-memory-research.md   # Full research document (formulas, literature review)
│
├── backend/
│   ├── main.py                     # FastAPI + consolidation scheduler
│   ├── config.py
│   ├── agent/
│   │   └── prompts.py              # System prompts (response style, memory instructions)
│   ├── memory/
│   │   ├── archival_memory.py      # Spacing-aware search + retrieval reinforcement
│   │   ├── background_agent.py     # Post-turn extraction + evidence-weighted contradictions
│   │   ├── compression.py          # Conversation rolling summaries
│   │   ├── consolidation.py        # Sleep replay: clustering, distillation, schema synthesis
│   │   ├── core_memory.py          # Always-in-context user profile (4 blocks)
│   │   ├── decisions.py            # Decision ledger with supersession chains
│   │   ├── embeddings.py           # Local ONNX embedding model
│   │   ├── knowledge_graph.py      # Graph RAG with fuzzy entity dedup
│   │   ├── manager.py              # Context-sensitive retrieval orchestration
│   │   ├── recall.py               # Conversation history search
│   │   ├── reconsolidation.py      # Lability windows (Nader et al. 2000)
│   │   └── scheduler.py            # Consolidation background task (6h interval)
│   ├── tools/
│   │   └── agent_tools.py          # 14 agent tools (memory CRUD, search, delete, insights)
│   ├── api/
│   │   ├── chat.py                 # SSE streaming with tool use loop
│   │   ├── memory.py               # Memory health API
│   │   ├── knowledge.py            # Knowledge graph API
│   │   ├── tasks.py                # Task management API
│   │   └── onboarding.py           # First-time setup flow
│   └── db/
│       └── models.py               # 9 tables (conversations, messages, core_memory,
│                                   #   archival_memory, entities, relationships,
│                                   #   tasks, memory_decisions, memory_schemas,
│                                   #   priority_snapshots)
│
└── frontend/
    └── src/
        ├── app/
        │   ├── page.tsx            # Chat + sidebar + memory panel
        │   ├── memory/page.tsx     # Memory explorer (core, archival, graph)
        │   └── tasks/page.tsx      # Task manager
        ├── components/
        │   ├── ChatWindow.tsx      # Streaming chat with thinking + tool indicators
        │   ├── MessageBubble.tsx   # Message rendering with markdown
        │   ├── MemoryPanel.tsx     # Live memory activity + insights panel
        │   ├── Sidebar.tsx         # Conversation list
        │   ├── KnowledgeGraph.tsx  # Force-directed graph visualization
        │   └── chat/               # Sub-components (code blocks, thinking, tools, cards)
        ├── lib/
        │   ├── api.ts              # API client + SSE streaming
        │   └── context/chat-context.tsx  # React context (chat state + memory events)
        └── types/chat.ts
```
| Tool | Purpose |
|---|---|
| `web_search` | DuckDuckGo search for current information |
| `collect_info` | Interactive form cards for structured input |
| `core_memory_append` | Append to always-in-context memory |
| `core_memory_replace` | Update or remove core memory content |
| `graph_memory_add` | Create/update knowledge graph entities |
| `graph_memory_search` | Search graph with 1-2 hop traversal |
| `graph_memory_delete` | Remove entities and their relationships |
| `archival_memory_insert` | Store facts in long-term memory |
| `archival_memory_search` | Semantic search over archival memory |
| `archival_memory_delete` | Remove incorrect memories |
| `memory_insights` | Query abstracted behavioral patterns |
| `decision_search` | Search the decision ledger by topic/domain |
| `conversation_search` | Search past conversations by content |
| `conversation_search_date` | Search conversations by date range |
- `POST /api/chat/send` — SSE streaming with tool use loop
- `GET /api/chat/conversations` — List conversations
- `GET /api/chat/conversations/:id/messages` — Get messages
- `DELETE /api/chat/conversations/:id` — Delete conversation

- `GET /api/memory/core` — Core memory sections
- `PUT /api/memory/core` — Update core memory
- `GET /api/memory/archival` — Archival memories (with layer, strength)
- `GET /api/memory/search?q=` — Semantic search
- `GET /api/memory/health` — Layer stats, schemas, decisions, priority timeline
- `GET /api/memory/labile` — Count of currently labile memories

- `GET /api/knowledge/entities` — List entities
- `GET /api/knowledge/entities/:id` — Entity with relationships
- `GET /api/knowledge/graph` — Full graph data for visualization
- `GET /api/knowledge/stats` — Graph statistics

- `POST /api/consolidate` — Manually trigger memory consolidation
- `GET /api/health` — Service health check
- Ebbinghaus (1885). Über das Gedächtnis. Original forgetting curve.
- Bjork & Bjork (1992). "A new theory of disuse." Storage strength vs retrieval strength.
- McClelland, McNaughton, O'Reilly (1995). "Why there are complementary learning systems."
- Nader, Schafe & LeDoux (2000). "Fear memories require protein synthesis for reconsolidation." Nature.
- Walker et al. (2003). "Dissociable stages of memory consolidation and reconsolidation." Nature.
- Cepeda et al. (2006). "Distributed practice in verbal recall tasks." Psychological Bulletin. Meta-analysis: spacing effect.
- Karpicke & Roediger (2008). "The critical importance of retrieval for learning." Science.
- Rescorla & Wagner (1972). "A theory of Pavlovian conditioning." Prediction error in belief updating.
- Friston (2010). "The free-energy principle." Nature Reviews Neuroscience. Bayesian brain hypothesis.
- Packer et al. (2023). "MemGPT: Towards LLMs as Operating Systems." arXiv:2310.08560
- MemoryBank (2023). "Enhancing LLMs with Long-Term Memory." arXiv:2305.10250
- Zep (2025). "A Temporal Knowledge Graph Architecture for Agent Memory." arXiv:2501.13956
- FADEMEM (2026). "Biologically-Inspired Forgetting and Adaptive Memory." arXiv:2601.18642
- TiMem (2026). "Temporal-Hierarchical Memory Consolidation." arXiv:2601.02845
- TraceMem (2026). "Weaving Narrative Memory Schemata." arXiv:2602.09712
See docs/brain-inspired-memory-research.md for the full research document with LaTeX formulas, literature comparison, and novelty assessment.
MIT