Your AI's memory shouldn't be locked to one app. It should follow you everywhere.
Note
Early-stage but actively deployed — 16 releases, in daily use across Claude.ai, Claude Code, and Claude Desktop. See Current status for what's working and what's planned.
mcp-awareness is shared memory for every AI you use. Any AI assistant can store and retrieve knowledge through it using the open Model Context Protocol (MCP). Self-host it today, or use the managed service when it launches. It works with any MCP-compatible client — Claude (all platforms), Cursor, VS Code, and more.
The problem: Every AI platform keeps its own memory silo. What you teach Claude doesn't exist in ChatGPT. Your desktop assistant's context doesn't follow you to mobile. Switch platforms, and you start over.
The fix: Externalize that knowledge into a service you control. Tell one AI about your infrastructure, your projects, your preferences — and every AI knows it. Permanently, portably, privately.
This morning, you drafted a plan with Claude on your phone during a commute. Back at your desk, Claude Desktop already had the context — you refined the engineering approach together. You moved to Claude Code for implementation and deployment, updating shared project status so every platform knows what happened. No copy-paste. No "remember what we discussed." The knowledge just follows you.
Tell one AI about your infrastructure, your preferences, or a design decision — and every AI knows it. Correct a mistake on your phone, and your desktop assistant never repeats it. That cross-platform continuity works today.
The store also provides ambient awareness: a collation engine applies learned patterns and suppressions, and your AI receives a compact briefing (~200 tokens) at the start of each conversation. If something needs attention, it says so. If not, silence — and that silence is the product.
What started as a homelab monitoring experiment turned into a portable knowledge layer for your entire life. The original LinkedIn post tells the origin story.
The full origin story
This project began with a single memory instruction in Claude.ai:
> "On the first turn of each conversation, call `synology-admin:get_resource_usage`. If CPU > 90%, RAM > 85%, any disk > 90% busy, or network/disk I/O looks abnormally high, briefly mention it as an FYI before responding."
That worked surprisingly well. Infrastructure awareness surfaced inline during unrelated conversations. The AI applied contextual judgment — it knew the NAS was a seedbox, so it didn't flag normal seeding activity. Conversational tuning worked too: "don't bug me until it's 97%" adjusted behavior immediately.
But it had obvious limits. Diagnostics weren't captured at detection time. There was no structural detection — if a key process stopped, every metric looked better, and nothing alerted. Knowledge lived in platform-locked memory. It only worked with one system, on one platform.
Today it's a portable knowledge store that tracks personal facts, project history, design decisions, and intentions alongside infrastructure monitoring. As edge providers come online, it will extend to calendars, location, health, and more — but the core store and cross-platform continuity work now.
Any AI can write knowledge. Any AI can read it. Knowledge accumulates through conversation, not configuration:
- `remember` — store permanent knowledge (still true in 30 days?): personal facts, project notes, design decisions
- `add_context` — store temporal knowledge that auto-expires (happening now, will become stale?)
- `learn_pattern` — record if/then rules for alert matching (when X happens, expect Y)
- `update_entry` — modify entries in place with automatic changelog tracking
- `get_knowledge` — retrieve by source, tags, or entry type with optional change history
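The split between permanent and temporal knowledge can be sketched as a minimal in-memory model. Field names here are illustrative, not the server's actual schema; the real server persists entries in Postgres.

```python
from datetime import datetime, timedelta, timezone

# Illustrative in-memory model of the remember / add_context distinction.
# The real server persists entries in Postgres; names here are assumptions.

def remember(store, source, text):
    """Permanent knowledge: no expiry."""
    store.append({"source": source, "text": text, "expires_at": None})

def add_context(store, source, text, ttl_days=30):
    """Temporal knowledge: auto-expires after ttl_days (default 30)."""
    expires = datetime.now(timezone.utc) + timedelta(days=ttl_days)
    store.append({"source": source, "text": text, "expires_at": expires})

def get_knowledge(store, source=None):
    """Retrieve non-expired entries, optionally filtered by source."""
    now = datetime.now(timezone.utc)
    return [
        e for e in store
        if (e["expires_at"] is None or e["expires_at"] > now)
        and (source is None or e["source"] == source)
    ]

store = []
remember(store, "homelab", "NAS doubles as a seedbox")
add_context(store, "project-x", "v0.16 release in progress", ttl_days=7)
print(len(get_knowledge(store)))  # 2 — both entries still live
```

The 30-day default is the same heuristic the tool description asks you to apply: will this still be true in 30 days?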
This is the key differentiator from platform-specific memory: the knowledge belongs to you, not to Claude, ChatGPT, or any single tool.
Every AI you use shares the same knowledge base. Plan on your phone, implement on your laptop, review from your desktop — context follows automatically. Your AIs can also maintain shared project status, so any platform knows what's been done and what's next.
Create a todo, reminder, or planned action from any platform. Intentions have a lifecycle — pending → fired → active → completed — and agents advance them through conversation. Time-based intentions fire automatically at deliver_at timestamps. As signal sources come online (GPS, calendar), intentions will also fire based on location and context.
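The lifecycle above can be sketched as a guarded state machine. The transition names follow this README; the snoozed-to-pending edge is an assumption about how snoozing resumes.

```python
# Sketch of the intention lifecycle as a guarded state machine.
# States and main transitions follow the README; snoozed -> pending
# is an assumption about how a snoozed intention resumes.

TRANSITIONS = {
    "pending": {"fired", "cancelled"},
    "fired":   {"active", "snoozed", "cancelled"},
    "active":  {"completed", "snoozed", "cancelled"},
    "snoozed": {"pending"},
}

def advance(state, target):
    """Validate one lifecycle transition; raise on illegal moves."""
    if target not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition: {state} -> {target}")
    return target

state = "pending"
for step in ("fired", "active", "completed"):
    state = advance(state, step)
print(state)  # completed
```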
Over time, your awareness store becomes a living estate document — financial accounts, insurance details, medical info, system access, family contacts. Not because you sat down to write a manual, but because you asked your AI to remember important details as you mentioned them. If you weren't here tomorrow, someone could find what they need.
Your AI receives a compact briefing (~200 tokens) at the start of each conversation. The collation engine applies learned patterns and active suppressions, then decides what to surface. If something needs attention, it mentions it. Otherwise, silence — and that silence is the product, confirming that everything was checked and nothing needs you.
For infrastructure monitoring, three layers of detection apply:
| Layer | Question | Catches |
|---|---|---|
| Threshold | "Is this number too high?" | CPU > 90%, disk > 95% full |
| Baseline | "Is this abnormal for THIS system?" | Deviation from rolling average |
| Knowledge | "Does this match what I expect?" | Process stopped, replication stalled, unexpected quiet |
The third layer is where the value is. Knowledge accumulates through conversation, not YAML.
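A minimal sketch of the three layers applied to one status sample (the thresholds and expectation rule are illustrative): note how the structural check fires even though every numeric metric looks healthy, which is exactly the gap the original setup had.

```python
# Sketch of the three detection layers applied to one metric sample.
# Thresholds, baselines, and the expectation rule are illustrative.

def check(sample, baseline_avg, expectations):
    alerts = []
    # Layer 1 — threshold: is this number too high?
    if sample["cpu"] > 90:
        alerts.append(("threshold", "cpu above 90%"))
    # Layer 2 — baseline: is this abnormal for THIS system?
    if abs(sample["cpu"] - baseline_avg) > 30:
        alerts.append(("baseline", "cpu deviates from rolling average"))
    # Layer 3 — knowledge: does this match what I expect?
    for proc in expectations["running_processes"]:
        if proc not in sample["processes"]:
            alerts.append(("structural", f"{proc} not running"))
    return alerts

# Low CPU looks "better" than usual — but only because a process died.
sample = {"cpu": 12, "processes": ["postgres"]}
alerts = check(sample, baseline_avg=40,
               expectations={"running_processes": ["postgres", "transmission"]})
print(alerts)  # [('structural', 'transmission not running')]
```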
Soft delete with 30-day trash retention. Bulk deletes show a dry-run count and require confirmation before committing. Restore from trash at any time. In-place updates track all changes in a changelog. No data is permanently destroyed without a retention period.
get_stats shows entry counts by type and lists all sources — so your AI can decide whether to pull everything or filter first. get_tags lists all tags with usage counts, preventing tag drift across platforms (e.g., one AI tagging "infrastructure" while another uses "infra").
```mermaid
flowchart TB
    subgraph Clients["Any MCP Client"]
        A1["Claude.ai"]
        A2["Claude Code"]
        A3["Claude Desktop"]
        A4["Cursor / VS Code / ..."]
    end
    subgraph Security["Cloudflare Edge"]
        WAF["WAF (path filter)"]
        TLS["TLS + Tunnel"]
    end
    subgraph Signals["Signal Sources (planned)"]
        direction TB
        S1["📅 Calendar"]
        S2["📍 GPS / Location"]
        S3["💬 SMS / Messaging"]
        S4["🏠 Home Assistant"]
        S5["🖥️ NAS / Docker / CI"]
    end
    subgraph Server["mcp-awareness"]
        direction LR
        Store["Store\n(Postgres + pgvector)"]
        Collator["Collator\n• patterns\n• suppressions\n• intentions"]
        Briefing["Briefing\n~200 tokens"]
        Store --> Collator --> Briefing
    end
    Clients <-- "MCP\n(stdio or HTTPS)" --> Security
    Security <--> Server
    Signals -- "remember\nadd_context\nremind" --> Server
```
One script, three containers, a public URL. No account needed.
```bash
curl -sSL https://raw.githubusercontent.com/cmeans/mcp-awareness/main/install-demo.sh | bash
```
Prefer to review the script first? View it on GitHub, then download and run locally.
This starts the Awareness server, Postgres, and a Cloudflare quick tunnel. You'll get a public URL and ready-to-paste config snippets for all major MCP clients. The instance comes pre-loaded with demo data — your AI will discover it automatically.
Note: The tunnel URL is ephemeral — it changes on restart. For a stable URL, see the Deployment Guide.
Model matters: Best experience with Claude Sonnet 4.6 or Opus 4.6. Smaller models (Haiku, GPT-4o-mini) may not follow MCP prompts reliably.
```bash
git clone https://github.com/cmeans/mcp-awareness.git
cd mcp-awareness
docker compose up -d
```
The server is running on port 8420. Point any MCP client at `http://localhost:8420/mcp`.
Claude Desktop / Claude Code (local):
```json
{
  "mcpServers": {
    "awareness": {
      "url": "http://localhost:8420/mcp"
    }
  }
}
```
Claude.ai (remote, requires Deployment Guide setup):
- Settings → Connectors → Add custom connector
- Name: `awareness`
- URL: `https://your-domain.com/your-secret/mcp`
- Leave OAuth fields blank
| Variable | Default | Description |
|---|---|---|
| `AWARENESS_TRANSPORT` | `stdio` | Transport: `stdio` or `streamable-http` |
| `AWARENESS_HOST` | `0.0.0.0` | Bind address (HTTP mode) |
| `AWARENESS_PORT` | `8420` | Port (HTTP mode) |
| `AWARENESS_DATABASE_URL` | (required) | PostgreSQL connection string. Example: `postgresql://user:pass@localhost:5432/awareness` |
| `AWARENESS_MOUNT_PATH` | (none) | Secret path prefix for access control (e.g., `/my-secret`). When set, only `/<secret>/mcp` is served; all other paths return 404. Use with a Cloudflare WAF rule. |
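The secret-path behavior reduces to a simple routing check. This sketch assumes exact-match semantics and is not the server's actual routing code:

```python
# Sketch of the AWARENESS_MOUNT_PATH check: when a secret mount path is
# set, only /<secret>/mcp is served; every other path returns 404.
# Exact-match semantics are an assumption for illustration.

def route(path, mount_path=None):
    expected = f"{mount_path}/mcp" if mount_path else "/mcp"
    return 200 if path == expected else 404

print(route("/mcp"))                                     # 200 — no secret configured
print(route("/mcp", mount_path="/my-secret"))            # 404 — secret required
print(route("/my-secret/mcp", mount_path="/my-secret"))  # 200
```

Combined with a Cloudflare WAF rule that drops everything except the secret prefix, unauthenticated scanners never reach a valid endpoint.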
| Variable | Default | Description |
|---|---|---|
| `AWARENESS_EMBEDDING_PROVIDER` | (none) | Set to `ollama` to enable the vector branch of hybrid search. Without it, search uses FTS only and `backfill_embeddings` is unavailable. |
| `AWARENESS_EMBEDDING_MODEL` | `nomic-embed-text` | Ollama model name for embeddings. Must match the model pulled in the Ollama container. |
| `AWARENESS_OLLAMA_URL` | `http://ollama:11434` | Ollama API endpoint. Default works with Docker Compose; change for external Ollama instances. |
| `AWARENESS_EMBEDDING_DIMENSIONS` | `768` | Vector dimensions. Must match the model output. Only change if using a non-default model. |
| Variable | Default | Description |
|---|---|---|
| `AWARENESS_AUTH_REQUIRED` | `false` | Set to `true` to require Bearer tokens on all requests |
| `AWARENESS_MAX_CONCURRENT_PER_OWNER` | `10` | Max concurrent requests per authenticated owner |
| `AWARENESS_JWT_SECRET` | (required for self-signed) | JWT signing secret. Generate with `mcp-awareness-secret` |
| `AWARENESS_JWT_ALGORITHM` | `HS256` | JWT signing algorithm |
| `AWARENESS_DEFAULT_OWNER` | (system username) | Default `owner_id` for unauthenticated/stdio connections |
| Variable | Default | Description |
|---|---|---|
| `AWARENESS_OAUTH_ISSUER` | (required for OAuth) | OIDC issuer URL (e.g., WorkOS AuthKit domain) |
| `AWARENESS_OAUTH_AUDIENCE` | (optional) | Expected `aud` claim in tokens |
| `AWARENESS_OAUTH_JWKS_URI` | `{issuer}/.well-known/jwks.json` | Override JWKS endpoint |
| `AWARENESS_OAUTH_USER_CLAIM` | `sub` | JWT claim to use as `owner_id` |
| `AWARENESS_OAUTH_AUTO_PROVISION` | `false` | Auto-create user on first valid OAuth login |
| `AWARENESS_OAUTH_PROXY` | `false` | Enable OAuth proxy workaround for Claude Desktop/Claude.ai |
| `AWARENESS_OAUTH_PROXY_BAN_DURATION` | `3600` | Auto-ban duration (seconds) for bogus OAuth requests |
| `AWARENESS_OAUTH_PROXY_IP_HEADERS` | `CF-Connecting-IP,X-Real-IP` | Trusted IP header priority chain |
| `AWARENESS_OAUTH_PROXY_RATE_AUTHORIZE` | `60` | Max `/authorize` requests per minute per IP |
| `AWARENESS_OAUTH_PROXY_RATE_TOKEN` | `60` | Max `/token` requests per minute per IP |
| `AWARENESS_OAUTH_PROXY_RATE_REGISTER` | `30` | Max `/register` requests per minute per IP |
| `AWARENESS_OAUTH_PROXY_RATE_WINDOW` | `60` | Rate limit sliding window in seconds |
| `AWARENESS_PUBLIC_URL` | (empty) | Public base URL for this server (e.g., `https://mcpawareness.com`). Required when behind a reverse proxy or Cloudflare Tunnel so that `/.well-known/oauth-protected-resource` returns the correct resource URL. |
See the Auth Setup Guide for complete configuration instructions.
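The per-IP OAuth proxy limits above describe a sliding window, e.g. 60 `/authorize` requests per 60-second window. A minimal sketch of that mechanism (not the server's actual implementation):

```python
import time
from collections import deque

# Sketch of a per-IP sliding-window rate limiter, as described for the
# OAuth proxy endpoints. Illustrative only, not the server's code.

class SlidingWindowLimiter:
    def __init__(self, limit=60, window=60.0):
        self.limit = limit
        self.window = window
        self.hits = {}  # ip -> deque of request timestamps

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits.setdefault(ip, deque())
        # Drop timestamps that have fallen out of the window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # over the limit for this window
        q.append(now)
        return True

limiter = SlidingWindowLimiter(limit=2, window=60.0)
print(limiter.allow("1.2.3.4", now=0.0))   # True
print(limiter.allow("1.2.3.4", now=1.0))   # True
print(limiter.allow("1.2.3.4", now=2.0))   # False — over limit
print(limiter.allow("1.2.3.4", now=61.0))  # True — first hit expired
```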
| Variable | Default | Description |
|---|---|---|
| `AWARENESS_STATELESS_HTTP` | `false` | When `true`, uses stateless HTTP mode — fresh transport per request, no session tracking. Eliminates session drop / 409 issues. Recommended for production. |
| `AWARENESS_SESSION_DATABASE_URL` | (none) | Postgres DSN for the session database. Ignored when `AWARENESS_STATELESS_HTTP=true`. If unset in stateful mode, the session registry is disabled and sessions are in-memory only. |
| `AWARENESS_SESSION_TTL` | `1800` | Session expiry in seconds (30 min). Sliding window — extended on each request. |
| `AWARENESS_SESSION_POOL_MIN` | `1` | Min connection pool size for the session database |
| `AWARENESS_SESSION_POOL_MAX` | `5` | Max connection pool size for the session database |
| `AWARENESS_MAX_SESSIONS_PER_OWNER` | `10` | Maximum active sessions per owner. New session requests are rejected with 429 when the limit is reached. |
| `AWARENESS_SESSION_NODE_NAME` | (hostname) | Identifies this node in the registry. Used for cross-node recovery logging. |
| Variable | Default | Description |
|---|---|---|
| `POSTGRES_PASSWORD` | `awareness-dev` | Postgres password. Change for any non-demo deployment. |
| `AWARENESS_PG_DATA` | `~/awareness-pg` | Host path for the Postgres data volume |
| `AWARENESS_OLLAMA_DATA` | `~/awareness-ollama` | Host path for the Ollama model cache volume |
| `CLOUDFLARED_CONFIG` | `~/.cloudflared` | Path to the cloudflared config directory (named tunnel) |
| `CLOUDFLARED_CREDS` | (required for named tunnel) | Path to the tunnel credentials JSON file |
```bash
pip install -e ".[dev]"        # install with dev dependencies
python -m pytest tests/        # run tests
ruff check src/ tests/         # lint
mypy src/mcp_awareness/        # type check
```
`benchmarks/semantic_search_bench.py` measures semantic search latency across scale tiers (500–10K entries), filter combinations, and concurrency levels. It creates an isolated database, generates synthetic data, embeds via Ollama, and cleans up after itself.
```bash
docker compose exec mcp-awareness python benchmarks/semantic_search_bench.py
```
Results from the initial run (2026-03-27): HNSW query P50 stays under 4 ms from 500 to 10K entries across all filter scenarios. A concurrency bottleneck appears at 10 workers (~100 ms P50) due to connection pool size. See the benchmark script for configuration and methodology.
The server exposes 30 MCP tools. Clients that support MCP resources also get 6 read-only resources, but since not all clients surface resources, every resource has a tool mirror.
| Tool | Description |
|---|---|
| `get_briefing` | Compact awareness summary (~200 tokens all-clear, ~500 with issues). Call at conversation start. Pre-filtered through patterns and suppressions. |
| `get_alerts` | Active alerts, optionally filtered by source. Drill-down from the briefing. |
| `get_status` | Full status for a specific source, including metrics and inventory. |
| `get_knowledge` | Knowledge entries (patterns, context, preferences, notes). Filter by source, tags, entry_type, since/until, created_after/created_before, learned_from. The `hint` param re-ranks results by semantic similarity. |
| `get_suppressions` | Active alert suppressions with expiry times and escalation settings. |
| `get_stats` | Entry counts by type, list of sources, total count. Call before `get_knowledge` to decide whether to filter. |
| `get_tags` | All tags in use, with usage counts. Use to discover existing tags and prevent drift. |
| `get_intentions` | Pending/active intentions, optionally filtered by state, source, or tags. |
| `get_reads` | Read history for entries: who read what, when, from which platform. |
| `get_actions` | Action history: what was done because of an entry. |
| `get_unread` | Entries with zero reads. Cleanup candidates or missed knowledge. |
| `get_activity` | Combined read + action feed, chronological. |
| `get_related` | Bidirectional entry relationships: entries referenced via `related_ids` and entries that reference the given entry. |
| `search` | Hybrid search: finds entries by meaning (vector) and by exact terms (FTS), fused via Reciprocal Rank Fusion. Combines with tag/source/type/date/language filters. Requires an embedding provider for the vector branch; FTS works without it. |
| `semantic_search` | Deprecated alias for `search`. Will be removed in a future release. |
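Reciprocal Rank Fusion, which `search` uses to merge the vector and FTS rankings, is small enough to sketch. With the documented k=60, an entry that appears in both lists outranks the top hit of either single list:

```python
# Sketch of Reciprocal Rank Fusion (RRF, k=60), the fusion step the
# hybrid search tool uses to merge vector and full-text rankings.

def rrf(rankings, k=60):
    """rankings: list of ordered lists of entry IDs (best first)."""
    scores = {}
    for ranking in rankings:
        for rank, entry_id in enumerate(ranking, start=1):
            # Each list contributes 1/(k + rank) for every entry it ranks.
            scores[entry_id] = scores.get(entry_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["e7", "e2", "e9"]  # ranked by embedding similarity
fts_hits    = ["e2", "e4"]        # ranked by lexical match
print(rrf([vector_hits, fts_hits]))  # ['e2', 'e7', 'e4', 'e9']
```

This is why long documents rescued by lexical matches and rare identifiers found only by FTS still surface: each branch contributes rank-based score independently, so neither branch can be drowned out by the other.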
| Tool | Description |
|---|---|
| `report_status` | Report system status. Called periodically by edge processes. Upserts one entry per source; stale if the TTL expires without refresh. |
| `report_alert` | Report or resolve an alert. Captures diagnostics at detection time. Levels: warning, critical. Types: threshold, structural, baseline. |
| `remember` | Store permanent knowledge — facts still true in 30 days. Optional content payload with MIME `content_type`. Default choice for personal facts, project notes, design decisions. |
| `add_context` | Store temporal knowledge that auto-expires (default 30 days). Use for current events, milestones, or temporary states that will become stale. |
| `learn_pattern` | Record an if/then operational rule with conditions/effects for alert matching. Use only when there's a clear "when X happens, expect Y" relationship. |
| `set_preference` | Set a portable presentation preference (e.g., `alert_verbosity`, `check_frequency`). Upserts by key + scope. |
| `suppress_alert` | Suppress alerts by source/tags/metric. Time-limited, with an escalation override: critical alerts can break through. |
| `remind` | Create a todo, reminder, or planned action. Optional `deliver_at` timestamp for time-based surfacing. Intentions have a lifecycle: pending → fired → active → completed. |
| `update_intention` | Transition an intention state: pending → fired → active → completed/snoozed/cancelled. |
| `acted_on` | Log that you took action because of an entry. Tags are inherited from the entry. |
| Tool | Description |
|---|---|
| `update_entry` | Update a knowledge entry in place (note, pattern, context, preference). Tracks changes in the changelog. Status/alert/suppression entries are immutable. |
| `delete_entry` | Soft-delete entries (30-day trash). By ID, by source + type, or by source. Bulk deletes require `confirm=True` (dry-run by default). |
| `restore_entry` | Restore a soft-deleted entry from trash. |
| `get_deleted` | List all entries in trash, with IDs for restore. |
| `backfill_embeddings` | Generate embeddings for entries that don't have one yet, and re-embed stale entries whose content changed. Requires an embedding provider. |
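The dry-run contract of `delete_entry` can be sketched as follows; the return shape and field names are illustrative, not the tool's actual response schema:

```python
# Sketch of the bulk-delete dry-run behaviour: without confirm=True the
# call only counts matches; with it, entries are soft-deleted to trash.
# Return shape is an assumption for illustration.

def delete_entry(entries, source, confirm=False):
    matches = [e for e in entries
               if e["source"] == source and not e.get("deleted")]
    if not confirm:
        return {"dry_run": True, "would_delete": len(matches)}
    for e in matches:
        e["deleted"] = True  # soft delete: restorable for 30 days
    return {"dry_run": False, "deleted": len(matches)}

entries = [{"source": "nas"}, {"source": "nas"}, {"source": "calendar"}]
print(delete_entry(entries, "nas"))                # dry run: would_delete 2
print(delete_entry(entries, "nas", confirm=True))  # deletes 2
print(delete_entry(entries, "nas", confirm=True))  # nothing left: deletes 0
```

The two-step shape means an agent always sees the blast radius before anything moves to trash.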
See the Data Dictionary for full schema documentation.
The awareness store may contain personal information. Securing the endpoint is not optional.
| Layer | Purpose | When to use |
|---|---|---|
| Cloudflare WAF | Blocks unauthorized traffic at the edge | All deployments |
| Secret path | Server returns 404 for requests without the path prefix | All deployments |
| JWT auth | Self-signed tokens for scripts and edge providers | Programmatic access |
| OAuth 2.1 | External provider (WorkOS, Auth0, etc.) with per-user isolation | Multi-user deployments |
For single-user deployments, secret path + WAF is sufficient. For multi-user, enable OAuth — see the Auth Setup Guide.
Working end-to-end — deployed on mcpawareness.com via Cloudflare Tunnel with WAF protection. Tested with Claude (all platforms), Cursor, and VS Code.
- One-line demo install — `curl | bash` sets up Awareness + Postgres + Cloudflare quick tunnel with pre-loaded demo data and a `getting-started` prompt that personalizes your instance
- Published Docker images — `ghcr.io/cmeans/mcp-awareness` (GHCR) and Docker Hub, auto-built on release tags
- Optional embedding provider — add `AWARENESS_EMBEDDING_PROVIDER=ollama` and `docker compose --profile embeddings up -d` to enable the vector branch of hybrid search. FTS works without it
- CLI tools — `mcp-awareness-user` (user management), `mcp-awareness-token` (JWT generation), `mcp-awareness-secret` (signing secret generation)
- `remember`, `learn_pattern`, `add_context`, `set_preference` with filtered retrieval
- Idempotent upserts via `logical_key` — same source + key updates in place with changelog tracking
- In-place updates with changelog tracking (`update_entry` + `include_history`)
- General-purpose notes with optional content payload and MIME type
- Per-entry language support — optional `language` parameter (ISO 639-1) on write tools, auto-detection via lingua-py, `simple` fallback for unsupported languages
- `get_knowledge` language filter — query entries by their detected language
- Unsupported-language alerts — info-level alerts fire when lingua detects a language without a Postgres regconfig, signaling demand for future language support
- Store introspection: `get_stats` for entry counts, `get_tags` for tag discovery
- Soft delete with 30-day trash, dry-run confirmation for bulk operations
- Delete and restore by tags with AND logic
- Pagination (`limit`/`offset`) on all list queries
- Entry relationships via `related_ids` convention + `get_related` bidirectional traversal
- `search` tool — hybrid vector + full-text search fused via Reciprocal Rank Fusion (RRF, k=60). Finds entries by meaning and by exact terms: long documents are rescued by lexical matches, rare identifiers are found by FTS, semantic queries still use vector similarity
- `semantic_search` — deprecated alias for `search`, will be removed in a future release
- `backfill_embeddings` tool — embed pre-existing entries and re-embed stale ones
- `hint` parameter on `get_knowledge` — re-rank tag-filtered results by hybrid similarity
- Per-entry language-aware FTS — generated `tsvector` column with weighted fields (description=A, content/goal=B, tags=C) and language-specific stemming via Postgres regconfigs (28 stock snowball languages)
- Regconfig validation — valid configs cached from `pg_ts_config` at startup, invalid values fall back to `simple` with cache-refresh retry
- Background embedding generation via thread pool (non-blocking writes)
- Stale embedding detection via `text_hash` comparison
- Powered by Ollama (`nomic-embed-text`, 768 dimensions) — optional, self-hosted, zero cost
- Graceful degradation: FTS works without embeddings, vector search works without FTS matches, everything works without an embedding provider
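The stale-embedding check is essentially a hash comparison. A sketch, with assumed field names (`embedding`, `embedded_text_hash`, `description`); the actual column names live in the Data Dictionary:

```python
import hashlib

# Sketch of stale-embedding detection via text-hash comparison:
# re-embed only when an entry's text changed since it was last embedded.
# Field names here are assumptions for illustration.

def text_hash(text):
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def needs_reembedding(entry):
    """True if there is no embedding yet, or the stored hash is stale."""
    if entry.get("embedding") is None:
        return True
    return entry.get("embedded_text_hash") != text_hash(entry["description"])

entry = {"description": "NAS backup runs nightly", "embedding": [0.1, 0.2]}
entry["embedded_text_hash"] = text_hash(entry["description"])
print(needs_reembedding(entry))  # False — hash matches

entry["description"] = "NAS backup runs weekly"
print(needs_reembedding(entry))  # True — text changed, hash is stale
```

Hashing makes the background worker idempotent: re-running `backfill_embeddings` touches only entries whose text actually changed.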
- Ambient awareness: status reporting, alert detection, suppression, briefing generation
- Three-layer detection model (threshold + knowledge implemented; baseline planned)
- Suppression system with time-based expiry and escalation overrides
- Intentions with time-based triggers and lifecycle (pending → fired → active → completed/snoozed/cancelled)
- Read/action tracking for audit and activity feeds
- Full MCP API: 6 resources + 30 tools + 5 prompts
- Read tool mirrors for tools-only clients
- User-defined custom prompts from store entries with `{{var}}` templates
- Streamable HTTP + stdio transports
- PostgreSQL 17 with pgvector, GIN-indexed tag queries, HNSW-indexed embeddings, GIN-indexed tsvector for full-text search, Debezium CDC-ready
- Connection pooling (psycopg_pool, min 2 / max 5) with automatic health checks and reconnection
- List mode and since/until/created_after/created_before filters for lightweight queries
- Storage abstraction: `Store` protocol — backends are swappable without changing server or collator logic
- Alembic migration framework (version-tracked, raw SQL, auto-runs on Docker startup)
- Postgres-backed session registry — sessions survive node restarts and rolling deploys, with cross-node re-initialization and a redirect table for continuity. Feature-gated by `AWARENESS_SESSION_DATABASE_URL`
- JWT authentication with per-user owner isolation, OAuth 2.1 resource server (provider-agnostic JWKS validation), and Postgres RLS defense-in-depth
- Secret path auth + Cloudflare WAF for edge-level access control
- Docker Compose with Postgres, optional Ollama, named Cloudflare Tunnel, or ephemeral quick tunnel
- Request timing instrumentation and `/health` endpoint
- Comprehensive test suite (all tests run against real Postgres + Ollama in CI), strict type checking, CI pipeline with coverage, QA gate
When upgrading to a release with hybrid retrieval (Layer 1), running `mcp-awareness-migrate` applies two migrations:
- Schema migration — adds `language` (regconfig) and `tsv` (generated tsvector) columns to the entries table, plus GIN and partial indexes. Fast (DDL only).
- Language backfill — runs lingua-py detection on all existing entries and updates the `language` column where a known language is detected. This is a one-time data migration that may take longer than usual on the first deploy:
  - lingua's first call loads ~300 MB of n-gram models (multi-second startup cost)
  - Each existing entry is processed for language detection
  - If `lingua-language-detector` is not installed, the backfill is skipped and entries remain `simple` (FTS still works, just without language-specific stemming)
The `semantic_search` tool continues to work as a deprecated alias for the new `search` tool. Update your agent prompts to use `search` — the alias will be removed in a future release.
- Layer 2 (baseline) detection — rolling averages and deviation calculation
Edge providers are lightweight processes that monitor external systems and write knowledge into awareness. Each is simple alone; the server correlates them into well-timed, actionable intelligence. No automated producers exist yet (an example script demonstrates the write path).
| Provider | Source | What it writes |
|---|---|---|
| Google Calendar | GCal API (polling) | Today's events as add_context entries. Any agent knows your schedule without direct calendar access. |
| GPS / Location | Phone (Tasker/Shortcuts) | Current coordinates. Triggers location-based intentions, corroborates meeting attendance, auto-completes travel intentions. |
| NAS / Infrastructure | Synology, Proxmox | Disk health, container status, backup state. The original monitoring use case. |
| Health / Wearable | Garmin, Apple Health | Sleep, HRV, activity. Cross-domain correlations (sleep quality vs. productivity). |
| Home Assistant / Vision | HA + cameras (RPi, GoPro) | Household state detection — trash accumulation, packages at door, laundry pile. Paired with calendar: "trash building up + leaving by car → take it down, leave 5 min early." |
Multi-edge correlation is where the real value lives. No single edge makes decisions alone — calendar provides the expectation, GPS provides the evidence, vision provides the trigger, and the server reconciles them into intentions that self-resolve through the lifecycle: pending → fired → active → completed, with minimal user intervention.
Every app you use knows one thing about you. Your calendar knows your schedule. Your health tracker knows your sleep. Your NAS knows your disk usage. None of them know each other.
Awareness fills that gap — a self-hosted store where knowledge from disconnected contexts accumulates, and agents surface the connections no single app can see.
The product is silence. The most important briefing is attention_needed: false — confirmation that everything was checked and nothing needs you. An attention firewall, not another notification source.
Knowledge becomes ambient. It accumulates through daily use, not documentation. A living estate document that's always current because you and your agents maintain it together as a natural part of working. A house that remembers when the furnace was serviced. A decision trail that preserves the reasoning at the moment you made the choice.
Goals, not reminders. Intentions are already a decision-support system. "Pick up milk" isn't a notification — it's a goal that your agent evaluates against real-world circumstances. As signal sources come online (GPS, calendar, vision), intentions will fire based on location, time, and context — with your agent surfacing recommendations and you deciding what to act on.
Personal → family → team → community. One person today. A shared household store next. Team knowledge that accumulates through work. Community institutional memory for organizations with zero software budget.
Read the full vision: What Knowledge Becomes When It's Ambient
- Case Studies — real-world examples of awareness in practice, with agent attribution
- Vision — what knowledge becomes when it's ambient: silence, estate planning, place memory, intentions, and the progression from personal to community
- Auth Setup Guide — JWT authentication, OAuth 2.1, CLI tools, user provisioning, provider configuration
- Deployment Guide — demo install, secure deployment with Cloudflare Tunnel + WAF, client configuration
- From Metrics to Mental Models — core spec: three-layer detection model, API design, data schema
- Collation Layer — briefing resource, token optimization, escalation logic
- Data Dictionary — database schema, entry types, data field structures, lifecycle rules
- Memory Prompts — how to configure your AI to use awareness (platform memory, global CLAUDE.md, project CLAUDE.md)
- Changelog — version history
| | mcp-awareness | Platform memory (Claude, ChatGPT) | Mem0 / Zep |
|---|---|---|---|
| Portable | Any MCP client | Locked to one platform | Framework-specific API |
| Self-hosted | Yes, with managed option planned | No | SaaS only (Mem0) |
| Bidirectional | Read and write from any client | Read-only recall | Varies |
| Change tracking | `changelog` on every update | None | None |
| Open protocol | MCP (open standard) | Proprietary | Proprietary |
| Awareness | Knowledge + system monitoring | Memory only | Memory only |
| You own the data | Yes | No | Depends |
This project is built using the thing it builds. The developer works across multiple AI platforms — Claude Desktop, Claude Code, Claude on Android — and the friction encountered in daily use drives the features that get built. The agents are tools; the human directs every decision.
Claude Desktop surfaced a data pollution problem; the user directed the fix through Claude Code. A plan drafted on mobile during a commute was waiting in awareness when the user sat down at the desk. Claude Code flagged aspirational README claims; the user confirmed and corrected them.
Each interaction generated a case study. Read them all: Case Studies — Awareness in Practice
AGPL-3.0-or-later — see LICENSE for details.
Versions prior to v0.14.0 were released under the Apache License 2.0.
