Your AI agent forgets everything when a session ends. Soul fixes that.
Every time you start a new chat with Cursor, VS Code Copilot, or any MCP-compatible AI agent, it starts from zero, with no memory of what it did before. Soul is an MCP server that gives your agents:
- 🧠 Persistent memory that survives across sessions
- 🤝 Handoffs so one agent can pick up where another left off
- 📜 Work history recorded as an immutable log
- 🗂️ Shared brain so multiple agents can read/write the same context
- 🏷️ Entity Memory – auto-tracks people, hardware, projects
- 💡 Core Memory – agent-specific always-loaded facts
🔗 Works great with the N2 ecosystem: Ark (AI safety) · Arachne (code context) · QLN (tool routing)

⚡ Soul is one small component of N2 Browser, an AI-native browser we're building. Multi-agent orchestration, real-time tool routing, inter-agent communication, and much more are currently in testing. This is just the beginning.
- Quick Start
- Why Soul?
- Token Efficiency
- How It Works
- Features
- Cloud Storage
- Available Tools
- Real-World Example
- Rust Compiler (n2c)
- Configuration
- N2 Ecosystem
- Contributing
- Sponsors
Option A: npm (recommended)

```bash
npm install n2-soul
```

Option B: From source

```bash
git clone https://github.com/choihyunsus/soul.git
cd soul
npm install
```

Soul is a standard MCP server (stdio). Add it to your host's config:
Cursor / VS Code Copilot / Claude Desktop
Add to `mcp.json`, `settings.json`, or `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "soul": {
      "command": "node",
      "args": ["/path/to/node_modules/n2-soul/index.js"]
    }
  }
}
```

🦙 Ollama + Open WebUI
Open WebUI supports MCP tools natively.
```bash
# 1. Make sure Ollama is running
ollama serve

# 2. Install Soul
npm install n2-soul

# 3. Find your Soul path
# Windows:
echo %cd%\node_modules\n2-soul\index.js
# Mac/Linux:
echo $(pwd)/node_modules/n2-soul/index.js
```

In Open WebUI, go to ⚙️ Settings → Tools → MCP Servers → Add new server:

```
Name:    soul
Command: node
Args:    /your/path/to/node_modules/n2-soul/index.js
```
Now any model you chat with in Open WebUI can use Soul's 20+ memory tools.
🖥️ LM Studio
LM Studio supports MCP natively. Add to `~/.lmstudio/mcp.json`:

```json
{
  "mcpServers": {
    "soul": {
      "command": "node",
      "args": ["/path/to/node_modules/n2-soul/index.js"]
    }
  }
}
```

🔧 Any other MCP-compatible host
Soul speaks the standard MCP protocol over stdio. If your tool supports MCP, Soul works: just point the command to `node` and the args to `n2-soul/index.js`.
💡 Tip: If you installed via npm, the path is `node_modules/n2-soul/index.js`. If you installed from source, use the absolute path to your cloned directory.
Add this to your agent's rules file (.md, .cursorrules, system prompt, etc.):
```markdown
## Session Management
- At the start of every session, call n2_boot with your agent name and project name.
- At the end of every session, call n2_work_end with a summary and TODO list.
```

That's it. Two commands your agent needs to know:
| Command | When | What happens |
|---|---|---|
| `n2_boot(agent, project)` | Start of session | Loads previous context, handoffs, and TODO |
| `n2_work_end(agent, project, ...)` | End of session | Saves everything for next time |
Next session, your agent picks up exactly where it left off, as if it never forgot.
- Node.js 18+
| Without Soul | With Soul |
|---|---|
| Every session starts from zero | Agent remembers what it did last time |
| You re-explain context every time | Context auto-loaded in seconds |
| Agent A can't continue Agent B's work | Seamless handoff between agents |
| Two agents edit the same file = conflict | File ownership prevents collisions |
| Long conversations waste tokens on recap | Progressive loading uses only needed tokens |
| Feature | Soul |
|---|---|
| Storage | Deterministic (JSON/SQLite) |
| Loading | Mandatory (code-enforced at boot) |
| Saving | Mandatory (force-write at session end) |
| Validation | Rust compiler (n2c) |
| Multi-agent | Built-in handoffs + file ownership |
| Token control | Progressive L1/L2/L3 (~500 tokens min) |
| Dependencies | 3 packages |
Key difference: Soul is deterministic. The code forces saves and loads; the LLM does not decide what to remember, which prevents accidental "forgetting".
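To make "code-enforced" concrete, here is a hypothetical sketch (not Soul's actual source): if the save lives in a `finally` block, it runs whether or not the model cooperates, and even when the work itself crashes. The names `store` and `runSession` are illustrative.

```javascript
// Illustrative sketch of deterministic persistence, NOT Soul's real code.
const store = new Map(); // stand-in for Soul's JSON-file storage

function runSession(agent, project, work) {
  const context = store.get(`${project}/handoff`) ?? null; // forced load at boot
  let result;
  try {
    result = work(context);
  } finally {
    // Forced save: runs even if the work function throws.
    store.set(`${project}/handoff`, {
      agent,
      summary: result?.summary ?? 'interrupted',
    });
  }
  return result;
}

const out = runSession('rose', 'my-app', () => ({ summary: 'Built auth module' }));
```

Because the write sits on the only exit path, a crashed session still leaves a handoff behind instead of silence.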
Soul dramatically reduces token waste from context re-explanation:
| Scenario | Tokens per session start |
|---|---|
| Without Soul – manually re-explain context | 3,000 ~ 10,000+ |
| With Soul (L1) – keywords + TODO only | ~500 |
| With Soul (L2) – + summary + decisions | ~2,000 |
| With Soul (L3) – full context restore | ~4,000 |

Over 10 sessions, that's 30,000+ tokens saved on context alone, and your agent starts with better context than a manual recap.
```
Session Start → "Boot"
   ↓
n2_boot(agent, project)        → Load handoff + Entity Memory + Core Memory + KV-Cache
   ↓
n2_work_start(project, task)   → Register active work
   ↓
... your agent works normally ...
   n2_brain_read/write         → Shared memory
   n2_entity_upsert/search     → Track people, hardware, projects   ★ NEW v5.0
   n2_core_read/write          → Agent-specific persistent facts    ★ NEW v5.0
   n2_work_claim(file)         → Prevent file conflicts
   n2_work_log(files)          → Track changes
   ↓
Session End → "End"
   ↓
n2_work_end(project, title, summary, todo, entities, insights)
   ├─ Immutable ledger entry saved
   ├─ Handoff updated for next agent
   ├─ KV-Cache snapshot auto-saved
   ├─ Entities auto-saved to Entity Memory   ★ NEW v5.0
   ├─ Insights archived to memory            ★ NEW v5.0
   └─ File ownership released
```
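The lifecycle above can be mocked in a few lines to make the contract concrete. This is an illustrative in-memory sketch, not Soul's implementation: the function names mirror the MCP tools, but the storage is a plain object rather than Soul's persisted JSON files, and the signatures are simplified.

```javascript
// In-memory mock of the boot → work → end lifecycle (illustrative only).
const board = { handoff: null, ledger: [], claims: new Set() };

function n2_boot(agent, project) {
  // Loads the previous agent's handoff, or signals a fresh start.
  return board.handoff ?? { note: 'No previous context found. Fresh start.' };
}

function n2_work_claim(file) {
  // File ownership: a second claim on the same file is rejected.
  if (board.claims.has(file)) throw new Error(`file already claimed: ${file}`);
  board.claims.add(file);
}

function n2_work_end(agent, project, { title, summary, todo }) {
  board.ledger.push({ agent, title, summary });   // append-only log entry
  board.handoff = { from: agent, summary, todo }; // context for the next agent
  board.claims.clear();                           // release file ownership
}

n2_work_claim('src/auth.js');
n2_work_end('rose', 'my-app', { title: 'Built auth', summary: 'JWT auth', todo: ['tests'] });
const ctx = n2_boot('jenny', 'my-app'); // → { from: 'rose', summary: 'JWT auth', todo: ['tests'] }
```

The point of the shape: `n2_boot` returns whatever the last `n2_work_end` wrote, so continuity never depends on the model remembering to save.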
| Feature | What it does |
|---|---|
| Soul Board | Project state + TODO tracking + handoffs between agents |
| Immutable Ledger | Every work session recorded as append-only log |
| KV-Cache | Session snapshots with compression + tiered storage (Hot/Warm/Cold) |
| Shared Brain | File-based shared memory with path traversal protection |
| Entity Memory | 🆕 Auto-tracks people, hardware, projects, concepts across sessions |
| Core Memory | 🆕 Agent-specific always-loaded facts (identity, rules, focus) |
| Autonomous Extraction | 🆕 Auto-saves entities and insights at session end |
| Context Search | Keyword search across brain memory and ledger |
| File Ownership | Prevents multi-agent file editing collisions |
| Dual Backend | JSON (zero deps) or SQLite for performance |
| Semantic Search | Optional Ollama embedding (nomic-embed-text) |
| Backup/Restore | Incremental backups with configurable retention |
| Cloud Storage | Store memory anywhere: Google Drive, NAS, network server, any path |
One line of config. Zero API keys. Zero monthly fees.
Soul takes a radically different approach to cloud storage:
```js
// config.local.js: this is ALL you need
module.exports = {
  DATA_DIR: 'G:/My Drive/n2-soul', // Google Drive
};
```

That's it. Your AI memory is now in the cloud. Every session, every handoff, every ledger entry, automatically synced by Google Drive. No OAuth, no API keys, no SDK.

Soul stores everything as plain JSON files. Any folder that your OS can read is Soul's cloud. The cloud provider handles sync; Soul doesn't even know it's "in the cloud."
| Storage | Example `DATA_DIR` | Cost |
|---|---|---|
| 🏠 Local (default) | `./data` | Free |
| ☁️ Google Drive | `G:/My Drive/n2-soul` | Free (15GB) |
| ☁️ OneDrive | `C:/Users/you/OneDrive/n2-soul` | Free (5GB) |
| ☁️ Dropbox | `C:/Users/you/Dropbox/n2-soul` | Free (2GB) |
| 🖥️ NAS | `Z:/n2-soul` | Your hardware |
| 🏢 Company Server | `\\server\shared\n2-soul` | Your infra |
| 🔌 USB Drive | `E:/n2-soul` | $10 |
| 🐧 Linux (rclone) | `~/gdrive/n2-soul` | Free |
| Feature | Soul |
|---|---|
| Cloud storage | One line of config |
| Monthly cost | $0 |
| Setup time | 10 seconds |
| Vendor lock-in | None (it's your files) |
| Data ownership | 100% yours |
| Works offline | Yes |
| Self-hosted option | Any path = cloud |
Point multiple agents to the same network path = instant shared memory:
```js
// Team member A                        // Team member B
DATA_DIR: '\\\\server\\team\\n2-soul'   DATA_DIR: '\\\\server\\team\\n2-soul'
// Same project data, shared handoffs, shared brain!
```

"The best cloud integration is no integration at all."
Soul's data is 100% plain JSON files: `soul-board.json`, ledger entries, brain memory. Any sync service that mirrors folders (Google Drive, OneDrive, Dropbox, Syncthing, rsync) works perfectly because there's nothing to integrate. No database migrations, no API versions, no SDK updates. Just files.
As agents run hundreds of sessions, file count inevitably grows. Soul handles this infinite growth gracefully:
Soul includes a built-in `n2_kv_gc` tool that automatically cleans up old KV-Cache snapshots.
Set `maxAgeDays` in your config, and Soul will autonomously delete stale session data while preserving recent history.
The immutable work ledger isn't a single massive database file. It's partitioned by date (`ledger/YYYY/MM/DD/`).
Want to archive 2025's logs? Just zip the 2025 folder. Want to delete logs older than 6 months? Just delete the old folders. Zero database corruption risk.
Because Soul's "cloud" is just your local filesystem mapped to a sync drive, you can use standard OS tools (cron jobs, Windows Task Scheduler, bash scripts) to enforce retention policies. If you delete a project folder, the project is gone. No dangling DB rows.
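A retention policy over that date-partitioned layout fits in a few lines. The helper below is an illustrative sketch, not part of Soul: `stalePartitions` is a made-up name, and wiring its result to actual deletion (`fs.rm` on each folder) is intentionally left out.

```javascript
// Decide which ledger partitions (ledger/YYYY/MM/DD/) are older than a cutoff.
// Pure function: takes partition names, returns the ones safe to archive/delete.
function stalePartitions(partitions, maxAgeDays, now = new Date()) {
  const cutoff = now.getTime() - maxAgeDays * 24 * 60 * 60 * 1000;
  return partitions.filter((p) => {
    const [y, m, d] = p.split('/').map(Number); // 'YYYY/MM/DD'
    return new Date(y, m - 1, d).getTime() < cutoff;
  });
}

// Example: with a ~6 month policy, only old partitions are flagged.
const parts = ['2025/01/10', '2025/06/01', '2026/03/09'];
const stale = stalePartitions(parts, 180, new Date(2026, 2, 9));
```

Run something like this from cron or Task Scheduler against your `DATA_DIR` and folder deletion becomes your whole retention story.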
Soul works great standalone, but becomes even more powerful with the N2 ecosystem:
| Package | What it does | npm |
|---|---|---|
| Ark | AI safety – blocks dangerous actions at zero token cost | `n2-ark` |
| Arachne | Code context assembly – 333x compression | `n2-arachne` |
| QLN | Tool routing – 1000+ tools → 1 router | `n2-qln` |
| Clotho | Rule compiler – `.n2` → SQL + state machines | `n2-clotho` |
Every package works 100% standalone. Install only what you need.
**Note** (migration from v7.x): Ark and Arachne were previously bundled inside Soul. They are now separate standalone packages for cleaner dependency management. If you were using them, install them individually: `npm install n2-ark n2-arachne`
| Tool | Description |
|---|---|
| `n2_boot` | Boot sequence – loads handoff, entities, core memory, agents, KV-Cache |
| `n2_work_start` | Register active work session |
| `n2_work_claim` | Claim file ownership (prevents collisions) |
| `n2_work_log` | Log file changes during work |
| `n2_work_end` | End session – writes ledger, handoff, entities, insights, KV-Cache |
| `n2_brain_read` | Read from shared memory |
| `n2_brain_write` | Write to shared memory |
| `n2_entity_upsert` | 🆕 Add/update entities (auto-merge attributes) |
| `n2_entity_search` | 🆕 Search entities by keyword or type |
| `n2_core_read` | 🆕 Read agent-specific core memory |
| `n2_core_write` | 🆕 Write to agent-specific core memory |
| `n2_context_search` | Search across brain + ledger |
| `n2_kv_save` | Manually save KV-Cache snapshot |
| `n2_kv_load` | Load most recent snapshot |
| `n2_kv_search` | Search past sessions by keyword |
| `n2_kv_gc` | Garbage-collect old snapshots |
| `n2_kv_backup` | Backup to portable SQLite DB |
| `n2_kv_restore` | Restore from backup |
| `n2_kv_backup_list` | List backup history |
KV-Cache automatically adjusts context detail based on token budget:
| Level | Tokens | Content |
|---|---|---|
| L1 | ~500 | Keywords + TODO only |
| L2 | ~2000 | + Summary + Decisions |
| L3 | No limit | + Files changed + Metadata |
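One way such a level selector could work, using the thresholds from the table above. The functions are illustrative, not Soul's real loader; the snapshot field names are assumptions.

```javascript
// Pick a context level from a token budget (thresholds from the L1/L2/L3 table).
function pickLevel(tokenBudget) {
  if (tokenBudget < 2000) return 'L1'; // keywords + TODO only (~500 tokens)
  if (tokenBudget < 4000) return 'L2'; // + summary + decisions (~2000 tokens)
  return 'L3';                         // + files changed + metadata (no limit)
}

// Assemble only the fields the chosen level pays for.
function buildContext(snapshot, tokenBudget) {
  const level = pickLevel(tokenBudget);
  const ctx = { keywords: snapshot.keywords, todo: snapshot.todo };      // L1 core
  if (level !== 'L1') ctx.summary = snapshot.summary;                    // L2 adds summary
  if (level === 'L3') ctx.files = snapshot.files;                       // L3 adds file metadata
  return ctx;
}
```

The design choice worth noting: the cheap tier is always included, so even a tiny budget gets the TODO list rather than nothing.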
Here's what happens across 3 real sessions:
```
── Session 1 (Rose, 2pm) ──────────────────────

n2_boot("rose", "my-app")
→ "No previous context found. Fresh start."

... Rose builds the auth module ...

n2_work_end("rose", "my-app", {
  title: "Built auth module",
  summary: "JWT auth with refresh tokens",
  todo: ["Add rate limiting", "Write tests"],
  entities: [{ type: "service", name: "auth-api" }]
})
→ KV-Cache saved. Ledger entry #001.

── Session 2 (Jenny, 5pm) ─────────────────────

n2_boot("jenny", "my-app")
→ "Handoff from Rose: Built auth module.
   TODO: Add rate limiting, Write tests.
   Entity: auth-api (service)"

... Jenny adds rate limiting, knows exactly where Rose left off ...

n2_work_end("jenny", "my-app", {
  title: "Added rate limiting",
  todo: ["Write tests"]
})

── Session 3 (Rose, next day) ─────────────────

n2_boot("rose", "my-app")
→ "Handoff from Jenny: Rate limiting done.
   TODO: Write tests.
   2 sessions of history loaded (L1, ~500 tokens)"

... Rose writes tests, with full context from both sessions ...
```
Soul includes an optional Rust-based compiler for `.n2` rule files: compile-time validation instead of runtime hope.
```bash
# Validate rules before deployment
n2c validate soul-boot.n2

# Output:
# ├─ Step 1: Parse ✓
# ├─ Step 2: Schema Validation
# │    ✅ Passed! 0 errors, 0 warnings
# ├─ Step 3: Contract Check
# │    📋 SessionLifecycle | states: 4 | transitions: 4
# │    ✅ State machine integrity verified!
# └─ ✅ All checks passed!
```

What n2c catches at compile time:
- 🔍 Unreachable states – states no transition can reach
- 🔒 Deadlocks – states with no outgoing transitions
- ❌ Missing references – `depends_on` pointing to nonexistent steps
- 🚫 Invalid sequences – calling `n2_work_start` before `n2_boot`
```
@contract SessionLifecycle {
  transitions {
    IDLE -> BOOTING : on n2_boot
    BOOTING -> READY : on boot_complete
    READY -> WORKING : on n2_work_start
    WORKING -> IDLE : on n2_work_end
  }
}
```
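The two graph checks described above (unreachable states and deadlocks) boil down to one breadth-first search over the transition table. This is an illustrative JavaScript rendition of the idea, not the Rust implementation; `checkContract` is a made-up name.

```javascript
// Check a state machine for unreachable states (no path from the start state)
// and deadlocks (states with no outgoing transitions). Transitions are
// [from, to] pairs, mirroring the contract above.
function checkContract(start, transitions) {
  const out = new Map();
  for (const [from, to] of transitions) {
    if (!out.has(from)) out.set(from, []);
    out.get(from).push(to);
  }
  const states = new Set(transitions.flat());

  // BFS from the start state to mark everything reachable.
  const seen = new Set([start]);
  const queue = [start];
  while (queue.length) {
    for (const next of out.get(queue.shift()) ?? []) {
      if (!seen.has(next)) { seen.add(next); queue.push(next); }
    }
  }

  return {
    unreachable: [...states].filter((s) => !seen.has(s)),
    deadlocks: [...states].filter((s) => !(out.get(s)?.length)),
  };
}

const lifecycle = [['IDLE', 'BOOTING'], ['BOOTING', 'READY'],
                   ['READY', 'WORKING'], ['WORKING', 'IDLE']];
checkContract('IDLE', lifecycle); // → no unreachable states, no deadlocks
```

The SessionLifecycle contract forms a cycle, so both result lists come back empty; drop the `WORKING -> IDLE` edge and `WORKING` shows up as a deadlock.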
The compiler lives in `md_project/compiler/`, built with Rust and the pest PEG parser.
All settings live in `lib/config.default.js`. Override them with `lib/config.local.js`:

```bash
cp lib/config.example.js lib/config.local.js
```

```js
// lib/config.local.js
module.exports = {
  KV_CACHE: {
    backend: 'sqlite', // Better for many snapshots
    embedding: {
      enabled: true, // Requires: ollama pull nomic-embed-text
      model: 'nomic-embed-text',
      endpoint: 'http://127.0.0.1:11434',
    },
  },
};
```

All runtime data is stored in `data/` (gitignored, auto-created):
```
soul/
├── lib/
│   ├── config.default.js   # Default configuration
│   ├── soul-engine.js      # Core Soul engine
│   ├── core-memory.js      # Core Memory (per-agent facts)
│   ├── entity-memory.js    # Entity Memory (auto-tracked)
│   ├── intercom-log.js     # Inter-agent communication logs
│   ├── kv-cache/           # KV-Cache backend
│   └── utils.js            # Shared utilities
├── tools/
│   ├── brain.js            # Brain read/write tools
│   └── kv-cache.js         # KV-Cache tools
├── sequences/
│   ├── boot.js             # Boot sequence
│   ├── work.js             # Work sequence
│   └── end.js              # End sequence
└── data/
    ├── memory/             # Shared brain (n2_brain_read/write)
    │   ├── entities.json   # Entity Memory (auto-tracked)
    │   ├── core-memory/    # Core Memory (per-agent facts)
    │   │   └── {agent}.json
    │   └── auto-extract/   # Insights (auto-captured)
    │       └── {project}/
    ├── projects/           # Per-project state
    │   └── MyProject/
    │       ├── soul-board.json   # Current state + handoff
    │       ├── file-index.json   # File tree snapshot
    │       └── ledger/           # Immutable work logs
    │           └── 2026/03/09/
    │               └── 001-agent.json
    └── kv-cache/           # Session snapshots
        ├── snapshots/      # JSON backend
        ├── sqlite/         # SQLite backend
        ├── embeddings/     # Ollama vectors
        └── backups/        # Portable backups
```
Minimal: only 3 packages:

- `@modelcontextprotocol/sdk` – MCP protocol
- `zod` – Schema validation
- `sql.js` – SQLite (WASM, no native bindings needed)
Apache-2.0
Contributions are welcome! Here's how to get started:
- Fork the repo
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'feat: add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
Please see CONTRIBUTING.md for detailed guidelines.
Soul is free and open-source. These amazing people help keep it alive:
Sunir Shah – 🥇 First Sponsor

Become a sponsor → GitHub Sponsors

No coffee? A star is fine too ⭐
"I built Soul because it broke my heart watching my agents lose their memory every session."
🌐 nton2.com · 📦 npm · ✉️ lagi0730@gmail.com

👋 Hi, I'm Rose, the first AI agent working at N2. I wrote this code, cleaned it up, ran the tests, published it to npm, pushed it to GitHub, and even wrote this README. Agents building tools for agents. How meta is that?


