Merged
17 changes: 17 additions & 0 deletions .cursor-plugin/marketplace.json
@@ -0,0 +1,17 @@
{
"name": "nowledge-community",
"owner": {
"name": "Nowledge Labs",
"email": "hello@nowledge-labs.ai"
},
"metadata": {
"description": "Official Cursor plugins from Nowledge Labs."
},
"plugins": [
{
"name": "nowledge-mem",
"source": "./nowledge-mem-cursor-plugin",
"description": "Bring Working Memory, memory recall, and handoff summaries into Cursor with Nowledge Mem."
}
]
}
52 changes: 52 additions & 0 deletions .github/workflows/release-gemini-extension.yml
@@ -0,0 +1,52 @@
name: Release Gemini Extension

on:
push:
tags:
- 'nowledge-mem-gemini-cli-extension-v*'

permissions:
contents: write

jobs:
release:
runs-on: ubuntu-latest
defaults:
run:
working-directory: nowledge-mem-gemini-cli-extension

steps:
- name: Check out repository
uses: actions/checkout@v4

- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: '20'

- name: Resolve release metadata
id: release_meta
run: |
VERSION=$(node -p "require('./package.json').version")
EXPECTED_TAG="nowledge-mem-gemini-cli-extension-v${VERSION}"
if [ "${GITHUB_REF_NAME}" != "${EXPECTED_TAG}" ]; then
echo "Tag ${GITHUB_REF_NAME} does not match package version ${VERSION}" >&2
exit 1
fi
echo "version=${VERSION}" >> "$GITHUB_OUTPUT"
echo "expected_tag=${EXPECTED_TAG}" >> "$GITHUB_OUTPUT"

- name: Validate extension package
run: npm run validate

- name: Build release archive
run: npm run package:release

- name: Create GitHub Release
uses: softprops/action-gh-release@v2
with:
name: Nowledge Mem Gemini CLI Extension v${{ steps.release_meta.outputs.version }}
body_path: nowledge-mem-gemini-cli-extension/release-notes/${{ steps.release_meta.outputs.version }}.md
files: |
nowledge-mem-gemini-cli-extension/dist/nowledge-mem-gemini-cli-extension.tar.gz
nowledge-mem-gemini-cli-extension/dist/nowledge-mem-gemini-cli-extension.tar.gz.sha256
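The release job assumes `npm run package:release` leaves a tarball plus a matching `.sha256` file in `dist/`. A hedged sketch of what such a script might do — the staged contents here are placeholders, not the real packaging steps:

```shell
#!/usr/bin/env bash
# Sketch of a package:release step: bundle the extension and emit the
# checksum file the release workflow uploads. Paths are assumptions.
set -euo pipefail

mkdir -p dist stage
# Stand-in payload; the real script would stage the extension files.
printf '{"name":"nowledge-mem-gemini-cli-extension"}\n' > stage/package.json

tar -czf dist/nowledge-mem-gemini-cli-extension.tar.gz -C stage .
(cd dist && sha256sum nowledge-mem-gemini-cli-extension.tar.gz \
  > nowledge-mem-gemini-cli-extension.tar.gz.sha256)

# Downloaders can verify the archive against the published checksum;
# prints "nowledge-mem-gemini-cli-extension.tar.gz: OK" on success.
(cd dist && sha256sum -c nowledge-mem-gemini-cli-extension.tar.gz.sha256)
```

Shipping the `.sha256` alongside the tarball lets users verify the download without trusting the transport.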
38 changes: 38 additions & 0 deletions .github/workflows/validate-gemini-extension.yml
@@ -0,0 +1,38 @@
name: Validate Gemini Extension

on:
pull_request:
paths:
- 'nowledge-mem-gemini-cli-extension/**'
- '.github/workflows/validate-gemini-extension.yml'
- '.github/workflows/release-gemini-extension.yml'
push:
branches:
- main
- dev
paths:
- 'nowledge-mem-gemini-cli-extension/**'
- '.github/workflows/validate-gemini-extension.yml'
- '.github/workflows/release-gemini-extension.yml'

jobs:
validate:
runs-on: ubuntu-latest
defaults:
run:
working-directory: nowledge-mem-gemini-cli-extension

steps:
- name: Check out repository
uses: actions/checkout@v4

- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: '20'

- name: Validate extension package
run: npm run validate

- name: Verify release packaging
run: npm run package:release
113 changes: 31 additions & 82 deletions README.md
@@ -4,7 +4,7 @@

<img src="https://github.com/user-attachments/assets/fbf6f921-ff0a-40dc-be43-8f9b0d66cb09" width="200" alt="Nowledge Community Logo">

**Support, feedback, and community hub for Nowledge Labs products**
**Community integrations for [Nowledge Mem](https://mem.nowledge.co)**

[![Discord](https://img.shields.io/badge/Discord-Join%20Community-5865F2?style=flat&logo=discord&logoColor=white)](https://nowled.ge/discord)
[![Docs](https://img.shields.io/badge/Docs-Read-orange?style=flat&logo=readthedocs&logoColor=white)](https://nowled.ge/mem-docs)
@@ -13,111 +13,60 @@

</div>

## ⚡ Nowledge Communities
## Integrations

### Nowledge Mem
Each directory is a standalone integration. Pick the one that matches your tool.

[![Get Mem](https://img.shields.io/badge/Get-Mem-00A3A3?style=flat&logo=rocket&logoColor=white)](https://mem.nowledge.co/)
| Integration | Install | What it does |
|-------------|---------|--------------|
| **[Skills](nowledge-mem-npx-skills)** | `npx skills add nowledge-co/community/nowledge-mem-npx-skills` | Reusable workflow package for Working Memory, routed recall, resumable handoffs, and distillation. Prefer native packages when your tool has one. |
| **[Claude Code Plugin](nowledge-mem-claude-code-plugin)** | `claude plugin marketplace add nowledge-co/community` then `claude plugin install nowledge-mem@nowledge-community` | Claude Code native plugin with hooks for Working Memory bootstrap, routed recall, and automatic session capture. |
| **[Gemini CLI Extension](nowledge-mem-gemini-cli-extension)** | `cd nowledge-mem-gemini-cli-extension && gemini extensions link .` | Gemini-native context, hooks, commands, and skills for Working Memory, routed recall, real thread save, and handoff summaries. |
| **[Cursor Plugin](nowledge-mem-cursor-plugin)** | Install the packaged plugin directory via Cursor's plugin workflow | Cursor-native plugin package with bundled MCP config, rules, Working Memory, routed recall, distillation, and honest `save-handoff` semantics. |
| **[Codex Prompts](nowledge-mem-codex-prompts)** | Copy `AGENTS.md` to your project | Codex-native workflow pack for Working Memory, routed recall, real session save, and distillation. |
| **[OpenClaw Plugin](nowledge-mem-openclaw-plugin)** | `openclaw plugins install @nowledge/openclaw-nowledge-mem` | Full memory lifecycle with memory tools, thread tools, automatic capture, and distillation. |
| **[Alma Plugin](nowledge-mem-alma-plugin)** | Search for Nowledge in the official Alma plugin marketplace | Alma-native plugin with Working Memory, thread-aware recall, structured saves, and optional auto-capture. |
| **[Raycast Extension](nowledge-mem-raycast)** | Search for Nowledge in the Raycast Extension Store | Search memories from the Raycast launcher. |
| **[MCP](#direct-mcp)** | For tools without a dedicated Nowledge package, use [direct MCP](#direct-mcp). | Standard memory and thread tools exposed through one shared MCP server. |

A **local-first, graph-augmented personal context manager** that preserves conversations and insights from your AI interactions. Stop opening 5 chatbot apps to find *that one conversation*. Type → found.
## Direct MCP

Mem persists entire conversation threads across all your AI tools while also distilling key insights into searchable memories. Everything connected through an intelligent knowledge graph.

**Privacy-first. Graph-native. AI-integrated.**

#### Ways to Use Mem

##### Capture Knowledge

- **🤖 MCP Integration** | Works with [Claude Code](https://github.com/nowledge-co/community/tree/main/nowledge-mem-claude-code-plugin), [Claude Desktop](https://github.com/nowledge-co/claude-dxt), [Cursor](https://mem.nowledge.co/docs/integrations), Github Copilot, Gemini CLI, Qwen Code & more, [ref config](./mcp.json)
- **🌐 Browser Extension** | One-click imports from ChatGPT, Claude, Gemini, Perplexity (Chrome, Edge)
- **📂 Thread Import** | Upload conversation files from Cursor, ChatWise, Markdown exports or Claude Code/Codex command

> [!NOTE]
> General MCP Configuration:
Add to your tool's MCP settings:

```json
{
  "mcpServers": {
    "nowledge-mem": {
      "url": "http://localhost:14242/mcp",
      "type": "streamableHttp"
    }
  }
}
```

##### Access Your Knowledge

- **⌨️ Global Launcher (⌘⇧K)** | Paste memories anywhere without leaving your workflow
- **🔍 In-App Search** | Deep exploration with semantic search, keywords, and graph navigation
- **🤝 Agent Access** | Let AI agents autonomously search and save memories during tasks

##### Advanced Features

- **🕸️ Knowledge Graph** | Auto-extract entities and relationships for semantic connections
- **🎯 Topic Clustering** | Discover natural groupings and expertise areas via graph algorithms
- **📊 Visual Exploration** | Interactive graph view to explore connections and patterns

### Open Knowledge Layer

[![GitHub](https://img.shields.io/badge/GitHub-OpenKL-181717?style=flat&logo=github&logoColor=white)](https://github.com/nowledge-co/OpenKL)

An **open-source unified protocol** enabling AI agents to access and interact with personal knowledge. **CLI-ready** and file-based for seamless integration with tools like Claude Code. Built for broader knowledge management: massive document collections, entire codebases, and external knowledge corpus with automatic chunking.
See [mcp.json](mcp.json) for the reference config.
Comment on lines +34 to +47
⚠️ Potential issue | 🟠 Major



Inline README example is missing required authentication headers.

The nowledge-mem server configuration in the README (lines 34–47) omits the headers object present in mcp.json. Users copying this snippet will be missing the required APP and X-NMEM-API-Key fields, causing MCP setup to fail.

Add the headers field to the inline example:

```json
{
  "mcpServers": {
    "nowledge-mem": {
      "url": "http://localhost:14242/mcp",
      "type": "streamableHttp",
      "headers": {
        "APP": "<MCP Client App Name here>",
        "X-NMEM-API-Key": "nmem_..."
      }
    }
  }
}
```

Alternatively, remove the JSON snippet and direct readers to copy from mcp.json directly.



## 🚀 Quick Links
## Requirements

### Nowledge Mem Community
- [Nowledge Mem](https://mem.nowledge.co) running locally
- `nmem` CLI on your PATH: if Mem is running on the same machine, install it from **Settings > Preferences > Developer Tools > Install CLI** in the app, or use `pip install nmem-cli` for a standalone setup

- 📚 **[Documentation](https://mem.nowledge.co/docs)** | Learn how to use Nowledge Mem
- 📖 **[Blog Post](https://www.nowledge-labs.ai/blog/nowledge-mem)** | Deep dive into our vision
- 🔌 **Claude Code Plugin** | Install via marketplace:

```bash
claude plugin marketplace add nowledge-co/community
claude plugin install nowledge-mem@nowledge-community
```

- 🛠️ **npx Skills** | Install to any supported agent (Claude Code, Cursor, OpenCode, Codex, etc.):

```bash
npx skills add nowledge-co/community/nowledge-mem-npx-skills
```

- 🐾 **OpenClaw Plugin** | Install for [OpenClaw](https://openclaw.ai) agents:

```bash
openclaw plugin add @nowledge/openclaw-nowledge-mem
```

- 🧠 **Alma Plugin** | Install local plugin for [Alma](https://alma.now):

```bash
cd nowledge-mem-alma-plugin && npm install
```

- 🔍 **Raycast Extension** | Search memories from [Raycast](https://raycast.com):

```bash
cd nowledge-mem-raycast && npm install && npm run dev
```

- 🐛 **[Report Bug](https://github.com/nowledge-co/community/issues/new?template=bug_report.md)** | Found an issue? Let us know
- 💡 **[Request Feature](https://github.com/nowledge-co/community/issues/new?template=feature_request.md)** | Share your ideas
- 💌 **[Send Feedback](mailto:hello@nowledge-labs.ai)** | We'd love to hear from you
```bash
nmem status # verify Nowledge Mem is running
```
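Before wiring up any integration, it can help to confirm that something is actually listening on the MCP port. A minimal sketch, assuming the default `localhost:14242` endpoint from `mcp.json` — the helper name is ours, not part of any Nowledge API:

```python
import socket

def mem_reachable(host: str = "127.0.0.1", port: int = 14242,
                  timeout: float = 1.0) -> bool:
    """Return True if something accepts TCP connections on the MCP port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable.
        return False

if __name__ == "__main__":
    print("Nowledge Mem reachable:", mem_reachable())
```

This only checks that the port is open; `nmem status` remains the authoritative health check.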

### Open Knowledge Layer Community
## Links

- 🔗 **[GitHub Repository](https://nowled.ge/kl)** | Explore the open-source project
- 💬 **[Join Discord](https://nowled.ge/discord)** | Connect with users and the team
- [Documentation](https://mem.nowledge.co/docs)
- [Blog](https://www.nowledge-labs.ai/blog/nowledge-mem)
- [Report a Bug](https://github.com/nowledge-co/community/issues/new?template=bug_report.md)
- [Request a Feature](https://github.com/nowledge-co/community/issues/new?template=feature_request.md)
- [hello@nowledge-labs.ai](mailto:hello@nowledge-labs.ai)

---

<div align="center">

**Built with ❤️ by [Nowledge Labs](https://nowledge-labs.ai)**
**Built by [Nowledge Labs](https://nowledge-labs.ai)**

</div>
78 changes: 78 additions & 0 deletions examples/conversation-markdown.md
@@ -0,0 +1,78 @@
---
title: Python Async Patterns
source: chatgpt
date: 2025-06-15
---

## System

You are a senior Python developer. Explain concepts clearly with practical examples.

## User

I keep getting confused about when to use `asyncio.gather` vs `asyncio.create_task`. What's the difference?

## Assistant

They serve different purposes:

**`asyncio.gather(*coros)`** runs multiple coroutines concurrently and waits for all of them to complete. It returns results in the same order you passed them:

```python
async def fetch_all():
user, posts, comments = await asyncio.gather(
fetch_user(1),
fetch_posts(1),
fetch_comments(1),
)
return user, posts, comments
```

**`asyncio.create_task(coro)`** schedules a coroutine to run in the background. You get a `Task` object back and can await it later — or never:

```python
async def process():
# Fire and forget — runs in background
task = asyncio.create_task(send_analytics())

# Do other work while analytics sends
result = await compute_something()

# Optionally await if you need the result
await task
```

**Rule of thumb:**
- Use `gather` when you need all results before continuing
- Use `create_task` when you want fire-and-forget or need to start work early

## User

What about error handling? If one task in `gather` fails, do the others keep running?

## Assistant

By default, if any coroutine in `gather` raises an exception, the exception propagates immediately — but the other tasks **keep running** in the background. This is a common source of bugs.

You can change this with `return_exceptions=True`:

```python
results = await asyncio.gather(
risky_operation_1(),
risky_operation_2(),
return_exceptions=True,
)

for result in results:
if isinstance(result, Exception):
logger.error(f"Task failed: {result}")
else:
process(result)
```

For `create_task`, unhandled exceptions are logged as warnings when the task is garbage-collected. Always either `await` the task or add an explicit error handler:

```python
task = asyncio.create_task(background_work())
task.add_done_callback(handle_task_exception)
```
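The `handle_task_exception` callback above is never defined in the conversation; a minimal sketch of one (the name and the print-based logging are illustrative, not a standard API):

```python
import asyncio

def handle_task_exception(task: asyncio.Task) -> None:
    # Cancellation is a normal shutdown path, not an error.
    if task.cancelled():
        return
    # Retrieving the exception also suppresses the
    # "exception was never retrieved" warning at GC time.
    exc = task.exception()
    if exc is not None:
        print(f"Background task failed: {exc!r}")

async def main():
    async def background_work():
        raise RuntimeError("boom")

    task = asyncio.create_task(background_work())
    task.add_done_callback(handle_task_exception)
    await asyncio.sleep(0)  # let the task run and fail
    await asyncio.sleep(0)  # let the done-callback fire

asyncio.run(main())
```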
5 changes: 3 additions & 2 deletions mcp.json
@@ -4,8 +4,9 @@
"url": "http://localhost:14242/mcp",
"type": "streamableHttp",
"headers": {
"APP": "<MCP Client App Name here>"
"APP": "<MCP Client App Name here>",
"X-NMEM-API-Key": "nmem_..."
Comment on lines +7 to +8

⚠️ Potential issue | 🟠 Major

Don’t put API-key slots in a tracked MCP config.

Even with a placeholder value, this nudges users to paste a live nmem_... token into a committed file. Keep the checked-in example header-free and document local-only auth setup separately.

🔐 Suggested change:

```diff
       "headers": {
-        "APP": "<MCP Client App Name here>",
-        "X-NMEM-API-Key": "nmem_..."
+        "APP": "<MCP Client App Name here>"
       }
```

      }
    }
  }
}
4 changes: 3 additions & 1 deletion nowledge-mem-alma-plugin/alma-skill-nowledge-mem.md
@@ -16,7 +16,7 @@ Use Nowledge Mem as the primary external memory system.
2. `nowledge_mem_search` for focused retrieval with filters.
3. `nowledge_mem_show` for full detail on selected memory IDs — includes `sourceThreadId`.
4. `nowledge_mem_thread_search` / `nowledge_mem_thread_show` for conversation history.
5. When a memory has a `sourceThreadId`, use `nowledge_mem_thread_show` to read the full source conversation.
5. When a memory has a `sourceThreadId`, use `nowledge_mem_thread_show` or `nmem --json t show` progressively: start with the first page and fetch more only if the current page is not enough.

For writes:

@@ -42,8 +42,10 @@ If neither plugin tools nor Bash are available, state the exact blocker once and
## Query Heuristics

- Trigger retrieval when user asks about prior decisions, historical context, previous threads, “what did we do before,” or asks to continue prior work.
- Start with memory/query search for distilled knowledge, and use thread search when the user is really asking about conversation history.
- Start normal mode first; use deep mode only when normal retrieval misses likely context.
- Prefer narrower queries over broad vague queries.
- Avoid dumping a huge thread when one page of messages is enough to answer.

## Write Heuristics

8 changes: 8 additions & 0 deletions nowledge-mem-bench/.gitignore
@@ -0,0 +1,8 @@
__pycache__/
*.pyc
.venv/
dist/
*.egg-info/
data/
results/
.env