Conversation
…act, subagent propagation

Register a full OpenClaw Context Engine alongside the memory slot. When activated via `plugins.slots.contextEngine: "nowledge-mem"`:

- assemble() injects behavioral guidance + recalled memories via systemPromptAddition (cache-friendly system-prompt space)
- afterTurn() captures threads + triage/distill every turn (not just session end)
- compact() enhances compaction instructions with saved knowledge graph context so key decisions survive summarization
- prepareSubagentSpawn() propagates Working Memory + recalled memories to child sessions automatically

Hooks remain as a backward-compatible fallback when CE is not active. Also fixes the recall hook: prependContext → appendSystemContext (cache-friendly).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
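The four lifecycle methods described above can be sketched as follows. This is illustrative only: the method names follow the commit message, but the client shape, return types, and config handling are assumptions, not the actual OpenClaw Context Engine interface.

```javascript
// Hypothetical CE sketch; `client` and `guidance` are assumed inputs.
function createContextEngineSketch(client, guidance) {
  return {
    // assemble(): behavioral guidance + recalled memories are injected via
    // systemPromptAddition (cache-friendly system-prompt space).
    async assemble(query) {
      const recalled = await client.search(query);
      return { systemPromptAddition: [guidance, ...recalled].join("\n") };
    },
    // afterTurn(): capture + triage/distill every turn, not just session end.
    async afterTurn(turn) {
      await client.capture(turn);
    },
    // compact(): enhance compaction instructions so key decisions survive
    // summarization.
    compact(instructions, savedDecisions) {
      return `${instructions}\nPreserve: ${savedDecisions.join("; ")}`;
    },
    // prepareSubagentSpawn(): propagate Working Memory to child sessions.
    prepareSubagentSpawn(childConfig, workingMemory) {
      return { ...childConfig, inheritedMemory: [...workingMemory] };
    },
  };
}
```

The hook-based fallback mentioned in the commit would cover the same responsibilities when no CE slot is registered.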
New users with no existing memories got zero guidance from the plugin — `buildMemoryContextBlock` returned empty when both WM and search were empty, so the AI never learned about Nowledge Mem tools. Now the 4-line `BEHAVIORAL_GUIDANCE` constant is always injected on the first message of every thread, regardless of recall results.

Also removes the `generated_at` timestamp from injected context (gratuitous per-turn variance, no purpose). Bump to v0.6.4.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
New skill detects the current agent, verifies nmem setup, and guides native plugin installation for richer features (auto-recall, auto-capture, graph tools). All existing skills now include a footer pointing agents to check-integration and the integrations docs page.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Documents which injection methods are cache-safe (`appendSystemContext`, `systemPromptAddition`) vs cache-breaking (`prependContext`) to prevent future regressions. Minor formatting fix in `client.js`.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
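The cache-safe/cache-breaking distinction above comes down to prefix reuse: providers can reuse computation only for an unchanged prompt prefix. A toy model (illustrative only, not OpenClaw or provider code) makes the difference concrete:

```javascript
// Toy prefix cache: reuse is limited to the longest previously seen prefix.
function reusablePrefixLength(seenPrompts, prompt) {
  let best = 0;
  for (const seen of seenPrompts) {
    let i = 0;
    while (i < Math.min(seen.length, prompt.length) && seen[i] === prompt[i]) i++;
    best = Math.max(best, i);
  }
  return best;
}

const system = "You are a helpful agent. Use nmem tools.";
const seen = [system];

// appendSystemContext-style: recalled text goes after, prefix stays intact.
const appended = system + "\n[recalled] user prefers dark mode";

// prependContext-style: recalled text goes first, shared prefix drops to 0.
const prepended = "[recalled] user prefers dark mode\n" + system;
```

Appending keeps the entire original system prompt reusable; prepending invalidates the cached prefix from the first character.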
…guide

- integrations.json: single source of truth for all 13 integrations (capabilities, transport, tool naming, thread save, install, detection)
- shared/behavioral-guidance.md: unified heuristics for WM, search, autonomous save, retrieval routing, thread save honesty
- docs/PLUGIN_DEVELOPMENT_GUIDE.md: rules for new plugin authors (transport, tool naming, skill alignment, capabilities checklist)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ance
- New status skill: nmem --json status for connection diagnostics
- distill-memory: add autonomous save encouragement ("do not wait to
be asked"), structured save fields (unit-type, labels, importance),
quality bar (skip routine, importance scale)
- search-memory: add contextual signals (debugging, architecture,
implicit recall language)
- check-integration: corrected install commands for all 8 agents,
references integrations.json as canonical source
- CHANGELOG: 0.6.0 entry
- README: add status skill to list
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
All plugins now share consistent behavioral heuristics aligned with community/shared/behavioral-guidance.md:

- distill-memory: "Save proactively... Do not wait to be asked" added to Claude Code, Droid, Cursor, Gemini CLI, Codex
- search-memory: contextual signals added to Droid, Cursor
- Bub _GUIDANCE_BASE: strengthened autonomous save language
- Codex AGENTS.md + distill.md: proactive save + add-vs-update
- Alma + OpenClaw: already had correct language (verified, no changes)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Registry section links to integrations.json, shared guidance, and plugin development guide as the single sources of truth
- Bub plugin added to integration table (was missing)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…tries

- integrations.json: npx-skills version 0.5.0 → 0.6.0 (matches CHANGELOG)
- integrations.json: add Antigravity + Windsurf trajectory extractors (were in README but missing from canonical registry)
- Claude Code distill-memory: remove redundant "Proactive Save" section (mandate moved to opening line; avoids duplication with "Suggestion")
- Cursor search-memory: remove duplicate "ambiguous result" line from Contextual Signals (already present in Strong Triggers)
- Bub plugin.py: update token budget comment ~50 → ~70 (matches actual guidance length after autonomous save strengthening)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Registry:
- Claude Code version 0.7.1 → 0.7.2 (matches CHANGELOG)
- Droid/Cursor install commands: prose → actual runnable commands
- Antigravity/Windsurf extractors: transport http-api → cli, autoCapture true → false, threadSave plugin-capture → manual-export (they are offline extraction CLIs, not live-capture agents)

README:
- Add Claude Desktop and Browser Extension rows (were in registry but missing from table)
- Gemini CLI install: git clone → Extensions Gallery (current path)
- Cursor install: generic prose → Marketplace search

Bub:
- Add missing CHANGELOG 0.2.1 entry (pyproject.toml was bumped but CHANGELOG was not)

Gemini CLI (nested submodule):
- search-memory: rename "Strong Triggers" → "Strong Signals", add "Contextual signals" section to match shared behavioral guidance

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Tracks the 0.1.4 release: search signal alignment + proactive save.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (1)
✅ Files skipped from review due to trivial changes (1)
📝 Walkthrough

This PR adds a canonical integrations registry and a plugin development guide, harmonizes shared behavioral guidance and CLI skills to proactive save/search rules, updates multiple README/docs and plugin versions, and implements a Context Engine (CE) path for the OpenClaw plugin with refactored hooks, exported helpers, and CE lifecycle logic.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Runtime as OpenClaw Runtime
    participant CE as Nowledge-Mem CE
    participant State as ceState
    participant Client as Memory Client
    participant Store as Memory Store
    Runtime->>CE: bootstrap()
    CE->>State: check active flag
    State-->>CE: active = true
    CE->>Client: fetch Working Memory
    Client->>Store: query WM
    Store-->>Client: WM results
    Client-->>CE: WM data cached
    Runtime->>CE: assemble(messages)
    CE->>CE: build systemPromptAddition (behavioral + WM + recalled)
    CE->>Client: search memories (if needed)
    Client->>Store: search query
    Store-->>Client: recalled results
    Client-->>CE: filtered results
    CE-->>Runtime: append systemPromptAddition
    Runtime->>CE: afterTurn()
    CE->>CE: capture thread data
    CE->>Client: triage/distill & save
    Client->>Store: save/update
    Store-->>Client: save result
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ 1 | ❌ 2
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
⚠️ Warning: Review ran into problems
🔥 Problems
Git: Failed to clone repository. Please run the
Actionable comments posted: 3
🧹 Nitpick comments (4)
nowledge-mem-claude-code-plugin/skills/distill-memory/SKILL.md (1)
8-10: Clarify proactive behavior by removing "suggest" framing.

Line 8 says "Do not wait to be asked," but Line 10 still frames this as "When to Suggest," which weakens the intended autonomous-save policy.
Proposed doc tweak
```diff
-## When to Suggest (Moment Detection)
+## When to Save (Moment Detection)
```

Based on learnings, memory distillation guidance should proactively save durable decisions/learnings rather than waiting for explicit user prompts.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@nowledge-mem-claude-code-plugin/skills/distill-memory/SKILL.md` around lines 8 - 10, Update the wording to remove "suggest" framing and make the policy explicitly proactive: change the heading "When to Suggest (Moment Detection)" to "When to Save (Moment Detection)" and replace any phrasing like "Do not wait to be asked" with a clear directive such as "Save proactively when the conversation produces a decision, preference, plan, procedure, learning, or important context" so the SKILL.md text reflects autonomous-save behavior rather than suggestion.

nowledge-mem-npx-skills/CHANGELOG.md (1)
5-28: Consider clarifying version release timing.

Both v0.6.0 and v0.5.0 are dated 2026-03-23. If these are being released together as part of this PR, consider either:
- Combining them into a single release (v0.6.0)
- Adding a note explaining the batch release
Otherwise, the changelog entries are well-structured and accurately document the new features and alignment with `shared/behavioral-guidance.md`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@nowledge-mem-npx-skills/CHANGELOG.md` around lines 5 - 28, The changelog lists both versions v0.6.0 and v0.5.0 with the same date (2026-03-23); clarify release timing by either combining the entries into a single 0.6.0 release or add a note that these were batched together on 2026-03-23; update CHANGELOG.md so the headings "## [0.6.0] - 2026-03-23" and "## [0.5.0] - 2026-03-23" reflect the chosen approach and include a brief explanatory sentence if you keep both entries (e.g., "Batch release: v0.5.0 and v0.6.0 published together on 2026-03-23").

nowledge-mem-npx-skills/skills/distill-memory/SKILL.md (1)
29-34: Structured save guidance is valuable.

The importance ranges (0.8–1.0 for major decisions, 0.5–0.7 for patterns, 0.3–0.4 for minor notes) provide actionable calibration for agents. The Native Plugin footer maintains consistency with other skills.
Note: The Cursor plugin's `distill-memory/SKILL.md` (per relevant snippet) uses an abbreviated format without importance ranges. Consider whether to harmonize all plugin skill docs to the same detail level for consistency, or keep platform-specific variations intentional.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@nowledge-mem-npx-skills/skills/distill-memory/SKILL.md` around lines 29 - 34, The docs add useful structured-save guidance (flags --unit-type, -l, -i with importance ranges 0.8–1.0, 0.5–0.7, 0.3–0.4) but the Cursor plugin’s distill-memory/SKILL.md uses an abbreviated format; update that SKILL.md to either mirror the same structured-save section (explicitly document --unit-type, -l, -i and the importance ranges) or add a short note explaining why the Cursor plugin intentionally omits ranges, so all plugin skill docs are consistent; reference the file "distill-memory/SKILL.md", the flags "--unit-type", "-l", "-i", and the importance ranges when making the change.

nowledge-mem-openclaw-plugin/src/context-engine.js (1)
55-67: `extractText` is duplicated across three files.

This function exists identically in `recall.js`, `capture.js`, and here. Consider extracting to a shared utility module (e.g., `src/utils.js`) to reduce duplication.
Verify each finding against the current code and only fix it if needed. In `@nowledge-mem-openclaw-plugin/src/context-engine.js` around lines 55 - 67, The function extractText is duplicated in extractText (used in context-engine.js) and also in recall.js and capture.js; factor it into a shared utility: create a new exported helper function extractText in a common module (e.g., utils.js) and replace the local implementations by importing that exported extractText where currently defined (refer to the extractText function signature and its callers in context-engine.js, recall.js, and capture.js), ensuring behavior stays identical and updating imports/exports accordingly.
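A shared `src/utils.js` helper along the lines the comment suggests might look like this. The real `extractText` body is not shown in this review, so the message shapes handled below are assumptions:

```javascript
// Hypothetical shared helper; handles plain strings, content-part arrays,
// and string `content` fields. Not the plugin's actual implementation.
function extractText(message) {
  if (typeof message === "string") return message;
  if (Array.isArray(message?.content)) {
    // Keep only text parts; skip images/tool results.
    return message.content
      .filter((part) => part && part.type === "text")
      .map((part) => part.text)
      .join("\n");
  }
  return typeof message?.content === "string" ? message.content : "";
}
```

Each of the three call sites would then import this single definition instead of carrying its own copy.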
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@nowledge-mem-alma-plugin/CLAUDE.md`:
- Around line 128-155: The CLAUDE.md Cache Safety section references a
non-existent file postmortem/2026-03-23-system-prompt-cache-breaking-plugins.md;
fix by either (A) removing that reference from the "Cache Safety" section in
CLAUDE.md, or (B) adding the missing postmortem document with the expected
content and filename so the link resolves—search for the reference string
"postmortem/2026-03-23-system-prompt-cache-breaking-plugins.md" and update
CLAUDE.md (section "Cache Safety") or create the postmortem file accordingly.
In `@nowledge-mem-codex-prompts/AGENTS.md`:
- Line 54: Update the AGENTS.md guidance for --unit-type to include the fuller
set of recommended memory categories so agents don’t under-classify memories:
expand the example list around the existing "--unit-type" usage to explicitly
include learning, decision, fact, procedure, event, preference, plan, and
context (in addition to the existing
decision/procedure/learning/preference/event), and add a short note to favor
high-signal memories and use -l labels when they improve retrieval; keep wording
concise and replace the current partial list with the expanded set.
In `@nowledge-mem-openclaw-plugin/src/context-engine.js`:
- Around line 159-162: The module-level ceState.active boolean causes races
across multiple CE instances; change activation to reference-counted so each
createNowledgeMemContextEngineFactory increments a counter and dispose()
decrements it, only toggling the global "active" state when the count goes 0->1
or 1->0. Update ce-state.js to expose e.g.
incrementActive()/decrementActive()/isActive() (or a numeric activeCount) and
replace direct reads/writes of ceState.active in
createNowledgeMemContextEngineFactory, the CE dispose() implementation, and the
other affected areas (the block around the other create factory at the later
lines referenced) to use these increment/decrement helpers so child sessions
don’t flip the shared flag prematurely.
---
Nitpick comments:
In `@nowledge-mem-claude-code-plugin/skills/distill-memory/SKILL.md`:
- Around line 8-10: Update the wording to remove "suggest" framing and make the
policy explicitly proactive: change the heading "When to Suggest (Moment
Detection)" to "When to Save (Moment Detection)" and replace any phrasing like
"Do not wait to be asked" with a clear directive such as "Save proactively when
the conversation produces a decision, preference, plan, procedure, learning, or
important context" so the SKILL.md text reflects autonomous-save behavior rather
than suggestion.
In `@nowledge-mem-npx-skills/CHANGELOG.md`:
- Around line 5-28: The changelog lists both versions v0.6.0 and v0.5.0 with the
same date (2026-03-23); clarify release timing by either combining the entries
into a single 0.6.0 release or add a note that these were batched together on
2026-03-23; update CHANGELOG.md so the headings "## [0.6.0] - 2026-03-23" and
"## [0.5.0] - 2026-03-23" reflect the chosen approach and include a brief
explanatory sentence if you keep both entries (e.g., "Batch release: v0.5.0 and
v0.6.0 published together on 2026-03-23").
In `@nowledge-mem-npx-skills/skills/distill-memory/SKILL.md`:
- Around line 29-34: The docs add useful structured-save guidance (flags
--unit-type, -l, -i with importance ranges 0.8–1.0, 0.5–0.7, 0.3–0.4) but the
Cursor plugin’s distill-memory/SKILL.md uses an abbreviated format; update that
SKILL.md to either mirror the same structured-save section (explicitly document
--unit-type, -l, -i and the importance ranges) or add a short note explaining
why the Cursor plugin intentionally omits ranges, so all plugin skill docs are
consistent; reference the file "distill-memory/SKILL.md", the flags
"--unit-type", "-l", "-i", and the importance ranges when making the change.
In `@nowledge-mem-openclaw-plugin/src/context-engine.js`:
- Around line 55-67: The function extractText is duplicated in extractText (used
in context-engine.js) and also in recall.js and capture.js; factor it into a
shared utility: create a new exported helper function extractText in a common
module (e.g., utils.js) and replace the local implementations by importing that
exported extractText where currently defined (refer to the extractText function
signature and its callers in context-engine.js, recall.js, and capture.js),
ensuring behavior stays identical and updating imports/exports accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 11adbc27-28d6-470e-8727-5de1ceec5aea
📒 Files selected for processing (40)
- README.md
- docs/PLUGIN_DEVELOPMENT_GUIDE.md
- integrations.json
- nowledge-mem-alma-plugin/CHANGELOG.md
- nowledge-mem-alma-plugin/CLAUDE.md
- nowledge-mem-alma-plugin/README.md
- nowledge-mem-alma-plugin/main.js
- nowledge-mem-alma-plugin/manifest.json
- nowledge-mem-bub-plugin/CHANGELOG.md
- nowledge-mem-bub-plugin/pyproject.toml
- nowledge-mem-bub-plugin/src/nowledge_mem_bub/plugin.py
- nowledge-mem-claude-code-plugin/skills/distill-memory/SKILL.md
- nowledge-mem-codex-prompts/AGENTS.md
- nowledge-mem-codex-prompts/distill.md
- nowledge-mem-cursor-plugin/skills/distill-memory/SKILL.md
- nowledge-mem-cursor-plugin/skills/search-memory/SKILL.md
- nowledge-mem-droid-plugin/skills/distill-memory/SKILL.md
- nowledge-mem-droid-plugin/skills/search-memory/SKILL.md
- nowledge-mem-gemini-cli
- nowledge-mem-npx-skills/CHANGELOG.md
- nowledge-mem-npx-skills/README.md
- nowledge-mem-npx-skills/skills/check-integration/SKILL.md
- nowledge-mem-npx-skills/skills/distill-memory/SKILL.md
- nowledge-mem-npx-skills/skills/read-working-memory/SKILL.md
- nowledge-mem-npx-skills/skills/save-handoff/SKILL.md
- nowledge-mem-npx-skills/skills/save-thread/SKILL.md
- nowledge-mem-npx-skills/skills/search-memory/SKILL.md
- nowledge-mem-npx-skills/skills/status/SKILL.md
- nowledge-mem-openclaw-plugin/CHANGELOG.md
- nowledge-mem-openclaw-plugin/CLAUDE.md
- nowledge-mem-openclaw-plugin/openclaw.plugin.json
- nowledge-mem-openclaw-plugin/package.json
- nowledge-mem-openclaw-plugin/src/ce-state.js
- nowledge-mem-openclaw-plugin/src/client.js
- nowledge-mem-openclaw-plugin/src/context-engine.js
- nowledge-mem-openclaw-plugin/src/hooks/behavioral.js
- nowledge-mem-openclaw-plugin/src/hooks/capture.js
- nowledge-mem-openclaw-plugin/src/hooks/recall.js
- nowledge-mem-openclaw-plugin/src/index.js
- shared/behavioral-guidance.md
```js
export function createNowledgeMemContextEngineFactory(client, cfg, logger) {
  return () => {
    ceState.active = true;
    logger.info("nowledge-mem: context engine activated");
```
Race condition: shared ceState.active flag corrupted by concurrent sessions.
When OpenClaw spawns subagents (via `prepareSubagentSpawn`), each child session creates its own CE instance. All instances share the module-level `ceState.active` boolean. If a child session's `dispose()` runs while the parent is still active, it sets `ceState.active = false` globally, causing the parent's hooks (`behavioral.js`, `recall.js`, `capture.js`) to resume firing and duplicate the CE's work.
Consider using reference counting:
🔒 Proposed fix: reference-counted activation

In `ce-state.js`:

```diff
-export const ceState = { active: false };
+let _activeCount = 0;
+export const ceState = {
+  get active() { return _activeCount > 0; },
+  acquire() { _activeCount++; },
+  release() { _activeCount = Math.max(0, _activeCount - 1); },
+};
```

In `context-engine.js`:

```diff
 export function createNowledgeMemContextEngineFactory(client, cfg, logger) {
   return () => {
-    ceState.active = true;
+    ceState.acquire();
     logger.info("nowledge-mem: context engine activated");
     // ...
   async dispose() {
-    ceState.active = false;
+    ceState.release();
     _sessions.clear();
```

Also applies to: 398-403
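The reference-counted behavior can be demonstrated standalone. This `ceState` is a local sketch mirroring the proposed fix, not the plugin's actual `ce-state.js`:

```javascript
// Reference-counted activation: only the 0<->1 transitions change `active`.
let _activeCount = 0;
const ceState = {
  get active() { return _activeCount > 0; },
  acquire() { _activeCount++; },
  release() { _activeCount = Math.max(0, _activeCount - 1); },
};

// Parent session activates its CE, then spawns a subagent with its own CE.
ceState.acquire(); // parent
ceState.acquire(); // child (subagent)

// The child's dispose() no longer deactivates the parent:
ceState.release(); // child dispose()
```

With a plain boolean, the child's release would have flipped the shared flag to false while the parent was still running; with the counter, `active` stays true until the last holder releases.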
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@nowledge-mem-openclaw-plugin/src/context-engine.js` around lines 159 - 162,
The module-level ceState.active boolean causes races across multiple CE
instances; change activation to reference-counted so each
createNowledgeMemContextEngineFactory increments a counter and dispose()
decrements it, only toggling the global "active" state when the count goes 0->1
or 1->0. Update ce-state.js to expose e.g.
incrementActive()/decrementActive()/isActive() (or a numeric activeCount) and
replace direct reads/writes of ceState.active in
createNowledgeMemContextEngineFactory, the CE dispose() implementation, and the
other affected areas (the block around the other create factory at the later
lines referenced) to use these increment/decrement helpers so child sessions
don’t flip the shared flag prematurely.
The previous list omitted fact, plan, and context, which could steer agents away from valid memory classifications.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The community repo is open source; references to postmortem files in the private parent repo are not resolvable for external contributors.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
nowledge-mem-alma-plugin/CLAUDE.md (1)
12-12: ⚠️ Potential issue | 🟡 Minor

Version inconsistency between status and changelog reference.
Line 12 states the current status is "as of v0.6.3", but line 152 references a change "Removed `generated_at` in 0.6.4". Either update the version in line 12 to v0.6.4, or clarify the forward-looking nature of the 0.6.4 reference.

📝 Suggested fix
Option 1: Update the current version if this documentation reflects v0.6.4

```diff
-## Current Status (as of v0.6.3)
+## Current Status (as of v0.6.4)
```

Option 2: Clarify the 0.6.4 reference is forward-looking

```diff
-- However, avoid embedding per-turn variance (timestamps, random IDs) in injected content. Removed `generated_at` in 0.6.4.
+- However, avoid embedding per-turn variance (timestamps, random IDs) in injected content. The `generated_at` field will be removed in v0.6.4.
```

Also applies to: 152-152
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@nowledge-mem-alma-plugin/CLAUDE.md` at line 12, Update the version reference inconsistency: locate the "Current Status (as of v0.6.3)" heading (line with "Current Status") and either change it to "as of v0.6.4" to match the changelog entry "Removed `generated_at` in 0.6.4" (the changelog line referencing 0.6.4) or modify the changelog entry to indicate it's upcoming (e.g., prefix with "planned" or "upcoming 0.6.4") so the document consistently reflects whether 0.6.4 is released or forward-looking.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@nowledge-mem-alma-plugin/CLAUDE.md`:
- Line 12: Update the version reference inconsistency: locate the "Current
Status (as of v0.6.3)" heading (line with "Current Status") and either change it
to "as of v0.6.4" to match the changelog entry "Removed `generated_at` in 0.6.4"
(the changelog line referencing 0.6.4) or modify the changelog entry to indicate
it's upcoming (e.g., prefix with "planned" or "upcoming 0.6.4") so the document
consistently reflects whether 0.6.4 is released or forward-looking.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 24cc233b-26eb-4d27-888d-7d9e62217854
📒 Files selected for processing (3)
- nowledge-mem-alma-plugin/CLAUDE.md
- nowledge-mem-codex-prompts/AGENTS.md
- nowledge-mem-openclaw-plugin/CLAUDE.md
✅ Files skipped from review due to trivial changes (2)
- nowledge-mem-codex-prompts/AGENTS.md
- nowledge-mem-openclaw-plugin/CLAUDE.md
Summary by CodeRabbit
New Features
Improvements
Documentation