fix: harden sync recovery and reviewer contracts #1
Conversation
Reviewer's Guide

This PR hardens command outcome classification, durable sync/continuity recovery behavior, and startup/continuity compilation under tight line budgets, while enriching recovery/audit JSON contracts and updating docs/reviewer guidance accordingly.

Sequence diagram for sync rollout processing with recovery markers

sequenceDiagram
participant CLI
participant SyncService
participant MemoryStore
participant RecoveryMatcher as matchesSyncRecoveryRecord
participant AuditBuilder as buildMemorySyncAuditEntry
CLI->>SyncService: processRollout(rolloutPath, evidence, force)
SyncService->>MemoryStore: getProcessedRolloutIdentity(rolloutPath, evidence)
MemoryStore-->>SyncService: processedIdentity
SyncService->>MemoryStore: readSyncRecoveryRecord()
MemoryStore-->>SyncService: existingRecoveryRecord or null
SyncService->>RecoveryMatcher: matchesSyncRecoveryRecord(existingRecoveryRecord, identity)
RecoveryMatcher-->>SyncService: isRecovery boolean
alt rollout already processed and not force
SyncService->>MemoryStore: hasProcessedRollout(processedIdentity)
MemoryStore-->>SyncService: true
alt isRecovery is true
SyncService->>MemoryStore: clearSyncRecoveryRecordBestEffort(identity)
MemoryStore-->>SyncService: void
end
SyncService->>AuditBuilder: buildMemorySyncAuditEntry(status skipped, skipReason already_processed, isRecovery isRecovery, operations undefined)
AuditBuilder-->>SyncService: MemorySyncAuditEntry
SyncService->>MemoryStore: appendSyncAuditEntry(entry)
MemoryStore-->>SyncService: void
SyncService-->>CLI: SyncResult status skipped
else rollout not yet processed or forced
SyncService->>MemoryStore: hasProcessedRollout(processedIdentity)
MemoryStore-->>SyncService: false
SyncService->>MemoryStore: writeSyncRecoveryRecord(record)
MemoryStore-->>SyncService: void
SyncService->>MemoryStore: markRolloutProcessed(processedIdentity)
MemoryStore-->>SyncService: void
SyncService->>AuditBuilder: buildMemorySyncAuditEntry(status success or failure, isRecovery isRecovery, operations appliedOperations)
AuditBuilder-->>SyncService: MemorySyncAuditEntry
SyncService->>MemoryStore: appendSyncAuditEntry(entry)
MemoryStore-->>SyncService: void
SyncService->>MemoryStore: clearSyncRecoveryRecordBestEffort(identity)
MemoryStore-->>SyncService: void
SyncService-->>CLI: SyncResult status success or failure
end
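The recovery decision in the diagram above hinges on matchesSyncRecoveryRecord. Below is a minimal sketch under the assumption that the matcher simply compares the four identity fields shown in the class diagram that follows; the real implementation may weigh additional evidence:

```typescript
// Shapes taken from the class diagram; the matcher itself is an assumed sketch.
interface SyncRecoveryIdentity {
  projectId: string;
  worktreeId: string;
  rolloutPath: string;
  sessionId: string;
}

type SyncRecoveryRecord = SyncRecoveryIdentity;

function matchesSyncRecoveryRecord(
  record: SyncRecoveryRecord | null,
  identity: SyncRecoveryIdentity
): boolean {
  // No marker on disk means this run is not a recovery.
  if (!record) return false;
  // A recovery only counts when every identity field lines up.
  return (
    record.projectId === identity.projectId &&
    record.worktreeId === identity.worktreeId &&
    record.rolloutPath === identity.rolloutPath &&
    record.sessionId === identity.sessionId
  );
}
```

This all-fields-must-match shape is also what the negative test suggested in Comment 4 relies on: a record with a different sessionId or rolloutPath must yield false so the marker is left intact.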
Class diagram for updated sync recovery and audit types

classDiagram
class SyncService {
- project Project
- store MemoryStore
- configuredExtractorName string
- sessionSource SessionSource
+ processRollout(rolloutPath string, evidence SyncEvidence, force boolean) Promise~SyncResult~
+ getProcessedRolloutIdentity(rolloutPath string, evidence SyncEvidence) Promise~ProcessedRolloutIdentity~
+ clearSyncRecoveryRecordBestEffort(identity SyncRecoveryIdentity) Promise~void~
}
class MemoryStore {
+ paths SyncStorePaths
+ getSyncState() Promise~SyncState~
+ hasProcessedRollout(identity ProcessedRolloutIdentity) Promise~boolean~
+ markRolloutProcessed(identity ProcessedRolloutIdentity) Promise~void~
+ readSyncRecoveryRecord() Promise~SyncRecoveryRecord or null~
+ writeSyncRecoveryRecord(record SyncRecoveryRecord) Promise~void~
+ clearSyncRecoveryRecord() Promise~void~
+ appendSyncAuditEntry(entry MemorySyncAuditEntry) Promise~void~
}
class SyncRecoveryRecord {
+ projectId string
+ worktreeId string
+ rolloutPath string
+ sessionId string
}
class SyncRecoveryIdentity {
+ projectId string
+ worktreeId string
+ rolloutPath string
+ sessionId string
}
class ProcessedRolloutIdentity {
+ projectId string
+ worktreeId string
+ rolloutPath string
+ sessionId string
+ sizeBytes number
+ mtimeMs number
}
class MemorySyncAuditEntry {
+ appliedAt string
+ projectId string
+ worktreeId string
+ rolloutPath string
+ sessionId string
+ sessionSource string
+ status MemorySyncAuditStatus
+ skipReason MemorySyncAuditSkipReason
+ isRecovery boolean
+ appliedCount number
+ scopesTouched MemoryScope[]
+ resultSummary string
+ operations MemoryOperation[]
}
class BuildMemorySyncAuditEntryOptions {
+ project Project
+ rolloutPath string
+ processedIdentity ProcessedRolloutIdentity
+ status MemorySyncAuditStatus
+ sessionSource string
+ appliedAt string
+ sessionId string
+ skipReason MemorySyncAuditSkipReason
+ isRecovery boolean
+ operations MemoryOperation[]
}
class MemorySyncAuditStatus {
}
class MemorySyncAuditSkipReason {
}
class MemoryScope {
}
class MemoryOperation {
}
class Project {
+ projectId string
+ worktreeId string
}
class SessionSource {
+ name string
}
class SyncEvidence {
+ sessionId string
}
class SyncStorePaths {
+ stateFile string
}
class SyncState {
}
class normalizeSyncState {
+ normalizeSyncState(value SyncState or null) SyncState
}
SyncService --> MemoryStore : uses
SyncService --> Project : uses
SyncService --> SessionSource : uses
SyncService --> ProcessedRolloutIdentity : builds
SyncService --> SyncRecoveryIdentity : clears recovery
SyncService --> MemorySyncAuditEntry : logs
MemoryStore --> SyncRecoveryRecord : reads_writes
MemoryStore --> ProcessedRolloutIdentity : stores
MemoryStore --> SyncState : returns
MemoryStore --> SyncStorePaths : owns
SyncRecoveryRecord <.. SyncRecoveryIdentity : matched_by
ProcessedRolloutIdentity <.. SyncRecoveryRecord : recovery_uses_subset
MemorySyncAuditEntry <.. BuildMemorySyncAuditEntryOptions : built_from
MemorySyncAuditEntry --> MemoryScope : touches
MemorySyncAuditEntry --> MemoryOperation : includes
normalizeSyncState --> SyncState : returns
MemoryStore ..> normalizeSyncState : calls
Project o-- ProcessedRolloutIdentity
Project o-- SyncRecoveryRecord
Project o-- MemorySyncAuditEntry
Class diagram for session continuity state merging changes

classDiagram
class SessionContinuityState {
+ updatedAt string
+ status string
+ sourceSessionId string
+ goal string
+ confirmedWorking string[]
+ triedAndFailed string[]
+ notYetTried string[]
+ incompleteNext string[]
+ filesDecisionsEnvironment string[]
}
class SessionContinuityLayerSummary {
+ goal string
+ confirmedWorking string[]
+ triedAndFailed string[]
+ notYetTried string[]
+ incompleteNext string[]
+ filesDecisionsEnvironment string[]
}
class sanitizeSessionContinuityLayerSummary {
+ sanitizeSessionContinuityLayerSummary(summary SessionContinuityLayerSummary) SessionContinuityLayerSummary
}
class sanitizeList {
+ sanitizeList(items string[], maxItems number, maxChars number) string[]
}
class sanitizeFailureList {
+ sanitizeFailureList(items string[]) string[]
}
class applySessionContinuityLayerSummary {
+ applySessionContinuityLayerSummary(base SessionContinuityState, summary SessionContinuityLayerSummary, sourceSessionId string) SessionContinuityState
}
class buildLayerSummary {
+ buildLayerSummary(existing SessionContinuityLayerSummary, next SessionContinuityLayerSummary) SessionContinuityLayerSummary
}
class heuristicSummary {
+ heuristicSummary(existingProject SessionContinuityLayerSummary, existingLocal SessionContinuityLayerSummary) SessionContinuityEvidenceBuckets
}
class SessionContinuityEvidenceBuckets {
+ project SessionContinuityLayerSummary
+ projectLocal SessionContinuityLayerSummary
}
applySessionContinuityLayerSummary --> SessionContinuityState : returns
applySessionContinuityLayerSummary --> SessionContinuityLayerSummary : uses
applySessionContinuityLayerSummary ..> sanitizeSessionContinuityLayerSummary : calls
applySessionContinuityLayerSummary ..> sanitizeList : merges_lists
applySessionContinuityLayerSummary ..> sanitizeFailureList : merges_failures
buildLayerSummary --> SessionContinuityLayerSummary : returns
buildLayerSummary ..> sanitizeSessionContinuityLayerSummary : uses_existing
buildLayerSummary ..> SessionContinuityLayerSummary : next_input
heuristicSummary --> SessionContinuityEvidenceBuckets : returns
heuristicSummary ..> buildLayerSummary : builds_layers
SessionContinuityEvidenceBuckets --> SessionContinuityLayerSummary : contains
note for applySessionContinuityLayerSummary "Now prefers sanitized goal from summary, concatenates new and base lists, and sanitizes within limits"
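The note above describes the merge semantics. A simplified sketch of that behavior, assuming the real implementation routes lists through sanitizeList/sanitizeFailureList with its own limits (sanitizeListSketch and applyLayerSummarySketch are illustrative stand-ins, reduced to two fields):

```typescript
// Reduced layer shape for illustration; the real summary carries all six lists.
interface LayerSummary {
  goal: string;
  incompleteNext: string[];
}

// Assumed stand-in for sanitizeList: trim, drop empties, de-duplicate, cap length.
function sanitizeListSketch(items: string[], maxItems = 8): string[] {
  const seen = new Set<string>();
  const out: string[] = [];
  for (const raw of items) {
    const item = raw.trim();
    if (!item || seen.has(item)) continue;
    seen.add(item);
    out.push(item);
    if (out.length >= maxItems) break;
  }
  return out;
}

function applyLayerSummarySketch(base: LayerSummary, next: LayerSummary): LayerSummary {
  return {
    // The summary's goal wins, even when empty: a blank goal clears a stale one.
    goal: next.goal.trim(),
    // New items come first, then carried-over base items, sanitized within limits.
    incompleteNext: sanitizeListSketch([...next.incompleteNext, ...base.incompleteNext])
  };
}
```

This matches the test in Comment 5: a summary with goal "" clears the stale goal while the merged incompleteNext list keeps both the fresh and the carried-over steps.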
Flow diagram for command outcome classification precedence

flowchart TD
A[Start classifyCommandOutcome] --> B[toolCall.output exists?]
B -- no --> C[Return unknown]
B -- yes --> D[Check explicit failure exit code pattern]
D -- match --> E[Return failure]
D -- no match --> F[Check explicit success exit code pattern]
F -- match --> G[Return success]
F -- no match --> H[Check generic failure pattern]
H -- match --> I[Return failure]
H -- no match --> J[Check generic success pattern]
J -- match --> K[Return success]
J -- no match --> L[Return unknown]
subgraph Patterns
D
F
H
J
end
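The precedence in the flowchart can be sketched as below. The explicit exit-code patterns and the failure pattern are quoted from Comment 2's code context; commandSuccessPattern is an assumed stand-in, since the generic success pattern is not shown in this review:

```typescript
// Patterns from the review's code context (Comment 2).
const explicitFailureExitCodePattern =
  /(Process exited with code [1-9]\d*|\bexit(?:ed)? code [1-9]\d*\b)/iu;
const explicitSuccessExitCodePattern =
  /(Process exited with code 0|\bexit(?:ed)? code 0\b)/iu;
const commandFailurePattern =
  /(Process exited with code [1-9]\d*|\bexit(?:ed)? code [1-9]\d*\b|\b[1-9]\d*\s+errors?\b|\b(?:error|failed|failure|exception|traceback|assertionerror|not ok|ELIFECYCLE)\b|\bFAIL\b|command not found|No such file or directory)/iu;
// Assumed generic success pattern; the real one is not visible in this diff.
const commandSuccessPattern = /\b(?:passed|success|PASS|ok)\b/iu;

type CommandOutcome = "success" | "failure" | "unknown";

function classifyCommandOutcome(output: string | undefined): CommandOutcome {
  if (!output) return "unknown";
  // Explicit exit codes outrank generic wording in either direction.
  if (explicitFailureExitCodePattern.test(output)) return "failure";
  if (explicitSuccessExitCodePattern.test(output)) return "success";
  if (commandFailurePattern.test(output)) return "failure";
  if (commandSuccessPattern.test(output)) return "success";
  return "unknown";
}
```

Note this ordering is exactly what Comment 2 flags as a risk: "exit code 0, but 1 error" classifies as success because the explicit success check runs before the generic failure pattern.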
Flow diagram for startup and continuity compilation with line budgets

flowchart TD
subgraph SharedHelper
X1["appendWithinBudget(lines, blockLines, maxLines, minimumLines)"] --> X2[Remaining lines >= minimumLines?]
X2 -- no --> X3[Return 0]
X2 -- yes --> X4[Append each line until maxLines]
X4 --> X5[Return appended count]
end
subgraph StartupMemory
A[Start compileStartupMemory] --> A1[lines = empty]
A1 --> A2[Append preamble via appendWithinBudget]
A2 --> A3[For each scope project local, project, global]
A3 --> A4[Build scopeBlock with heading, file path, quoted contents]
A4 --> A5[Append scopeBlock via appendWithinBudget with minimumLines 4]
A5 -- appended 0 --> A6[Break scopes loop]
A5 -- appended >= 4 --> A7[Track sourceFile]
A7 --> A3
A6 --> A8[Collect topic refs]
A3 --> A8
A8 --> A9[Has any topic ref?]
A9 -- no --> A14[Finalize text and lineCount]
A9 -- yes --> A10[Take firstTopicRef]
A10 --> A11[Build topicHeaderBlock with header, guidance, first topic]
A11 --> A12[Append via appendWithinBudget with minimumLines 3]
A12 -- appended >= 3 --> A13[Push firstTopicRef, append remaining topic refs one per line via appendWithinBudget]
A12 -- appended < 3 --> A14
A13 --> A14[Finalize text and lineCount]
end
subgraph SessionContinuity
B[Start compileSessionContinuity] --> B1[lines = empty]
B1 --> B2[Append preamble via appendWithinBudget]
B2 --> B3[For each source file]
B3 --> B4[Append - Source line via appendWithinBudget]
B4 -- appended 0 --> B5[Break file loop]
B4 -- appended > 0 --> B3
B5 --> B6[If any source files, append blank line via appendWithinBudget]
B6 --> B7[For each section title and items]
B7 --> B8[Build block: heading, quoted items, blank line]
B8 --> B9[Append block via appendWithinBudget with minimumLines 2]
B9 -- appended 0 --> B10[Break sections loop]
B9 -- appended > 0 --> B7
B10 --> B11[Join lines, trim, compute lineCount]
end
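The SharedHelper subgraph above translates to a small mutating helper. A sketch using the signature shown in the flowchart, appendWithinBudget(lines, blockLines, maxLines, minimumLines); internal details are assumed:

```typescript
// Appends blockLines into lines without exceeding maxLines.
// Returns the number of lines appended; returns 0 (appending nothing)
// when fewer than minimumLines slots remain, so callers can skip
// partial blocks that would be meaningless when truncated.
function appendWithinBudget(
  lines: string[],
  blockLines: string[],
  maxLines: number,
  minimumLines = 1
): number {
  const remaining = maxLines - lines.length;
  if (remaining < minimumLines) return 0;
  let appended = 0;
  for (const line of blockLines) {
    if (lines.length >= maxLines) break;
    lines.push(line);
    appended += 1;
  }
  return appended;
}
```

The minimumLines floor is what lets compileStartupMemory drop a scope block entirely (minimumLines 4) rather than emit a heading with no quoted contents, which is the behavior the renamed memory-store test locks in.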
📝 Walkthrough

This PR introduces recovery marker tracking for sync operations, implements budget-aware memory compilation with graceful degradation, adds corrupted state handling, enhances command classification with explicit exit-code detection, and applies Chinese language safeguards to continuity extraction. Comprehensive test coverage validates recovery flows, state resilience, and budget constraints.
Sequence Diagram(s)

sequenceDiagram
participant Client as Client/CLI
participant Sync as SyncService
participant Recovery as Recovery Records
participant Store as MemoryStore
participant Audit as Audit Entries
Client->>Sync: initiate sync with rollout
Sync->>Recovery: check existing recovery record<br/>matchesSyncRecoveryRecord()
Recovery-->>Sync: isRecovery = true/false
alt isRecovery && already-processed
Sync->>Recovery: clear recovery marker (best-effort)
Recovery-->>Sync: cleared
end
Sync->>Store: read sync state
Store->>Store: guard with try/catch
alt JSON parse fails
Store-->>Sync: return empty state
else success
Store-->>Sync: return parsed state
end
alt already-processed
Sync->>Audit: build skip entry with isRecovery flag
Audit-->>Sync: tagged audit entry
else apply operations
Sync->>Audit: apply ops & build entry with isRecovery flag
Audit-->>Sync: tagged audit entry
end
Sync-->>Client: return result with recovery context
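The "guard with try/catch" step in the diagram above pairs with normalizeSyncState from the class diagram (normalizeSyncState(value: SyncState | null): SyncState). A sketch of the corrupted-state fallback, with an assumed minimal state shape and a hypothetical readSyncStateFromText standing in for the store's file read:

```typescript
// Assumed minimal shape for illustration; the real SyncState carries more fields.
interface SyncState {
  processedRollouts: string[];
}

// Coerces null/malformed values into a safe empty state.
function normalizeSyncState(value: SyncState | null): SyncState {
  if (!value || !Array.isArray(value.processedRollouts)) {
    return { processedRollouts: [] };
  }
  return value;
}

// Hypothetical stand-in for MemoryStore's state-file read (I/O elided).
function readSyncStateFromText(raw: string): SyncState {
  try {
    return normalizeSyncState(JSON.parse(raw) as SyncState);
  } catch {
    // Corrupted state file: degrade to an empty state instead of throwing.
    return normalizeSyncState(null);
  }
}
```

This is the "JSON parse fails → return empty state" branch in the walkthrough: sync proceeds as if nothing had been processed rather than aborting on a corrupted state file.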
sequenceDiagram
participant App as Application
participant Compiler as Memory Compiler
participant Budget as Budget Manager
participant Output as Output Buffer
App->>Compiler: compile memory with budget X
Compiler->>Budget: appendWithinBudget(preamble, maxLines)
Budget->>Output: add lines if within budget
Budget-->>Compiler: return lines added
loop for each memory block/section
Compiler->>Budget: appendWithinBudget(block, maxLines, minimumLines)
alt lines added > 0
Budget->>Output: append block
Budget-->>Compiler: return count
else budget exhausted
Compiler->>Compiler: break loop
end
end
Compiler-->>App: return compiled memory<br/>respecting budget constraints
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 2 passed, ❌ 1 failed (1 warning)
No issues found across 27 files
Since this is your first cubic review, here's how it works:
- cubic automatically reviews your code and comments on bugs and improvements
- Teach cubic by replying to its comments. cubic learns from your replies and gets better over time
- Add one-off context when rerunning by tagging
@cubic-dev-aiwith guidance or docs links (includingllms.txt) - Ask questions if you need clarification on any suggestion
Hey - I've found 7 issues, and left some high-level feedback:

- The `appendWithinBudget` helper is duplicated in both `session-continuity.ts` and `startup-memory.ts`; consider centralizing this in a shared utility to avoid divergence in future tweaks to the budget behavior.
- In `compileStartupMemory`, the `scopeTopicRefs.length > 0` guard already guarantees a first topic ref, so the `if (!firstTopicRef)` early-return branch is effectively dead code and could be simplified away to make the topic section logic easier to follow.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The `appendWithinBudget` helper is duplicated in both `session-continuity.ts` and `startup-memory.ts`; consider centralizing this in a shared utility to avoid divergence in future tweaks to the budget behavior.
- In `compileStartupMemory`, the `scopeTopicRefs.length > 0` guard already guarantees a first topic ref, so the `if (!firstTopicRef)` early-return branch is effectively dead code and could be simplified away to make the topic section logic easier to follow.
## Individual Comments
### Comment 1
<location path="src/lib/domain/startup-memory.ts" line_range="90-95" />
<code_context>
).flat();
if (scopeTopicRefs.length > 0) {
- appendBlock([
+ const [firstTopicRef, ...remainingTopicRefs] = scopeTopicRefs;
+ if (!firstTopicRef) {
+ const finalText = lines.join("\n").trimEnd();
+ const finalLines = finalText ? finalText.split("\n") : [];
+ return {
+ text: `${finalText}\n`,
+ lineCount: finalLines.length,
</code_context>
<issue_to_address>
**suggestion:** The `!firstTopicRef` branch after checking `scopeTopicRefs.length > 0` appears unreachable.
Within the `if (scopeTopicRefs.length > 0)` block you destructure `[firstTopicRef, ...remainingTopicRefs]`, so `firstTopicRef` is guaranteed to be defined. The subsequent `if (!firstTopicRef)` early return is therefore dead code. Consider removing that branch or the outer length check to simplify the control flow and avoid duplicate finalization logic.
</issue_to_address>
### Comment 2
<location path="src/lib/extractor/command-utils.ts" line_range="7-11" />
<code_context>
const commandFailurePattern =
- /(Process exited with code [1-9]\d*|\bexit(?:ed)? code [1-9]\d*\b|\b(?:error|errors|failed|failure|exception|traceback|assertionerror|not ok|ELIFECYCLE)\b|\bFAIL\b|command not found|No such file or directory)/iu;
+ /(Process exited with code [1-9]\d*|\bexit(?:ed)? code [1-9]\d*\b|\b[1-9]\d*\s+errors?\b|\b(?:error|failed|failure|exception|traceback|assertionerror|not ok|ELIFECYCLE)\b|\bFAIL\b|command not found|No such file or directory)/iu;
+const explicitSuccessExitCodePattern = /(Process exited with code 0|\bexit(?:ed)? code 0\b)/iu;
+const explicitFailureExitCodePattern = /(Process exited with code [1-9]\d*|\bexit(?:ed)? code [1-9]\d*\b)/iu;
</code_context>
<issue_to_address>
**issue (bug_risk):** Ordering of explicit success vs. failure heuristics may misclassify mixed outputs.
Given the current order (explicit failure exit codes → explicit success exit codes → `commandFailurePattern` → `commandSuccessPattern`), an output like `"exit code 0, but 1 error"` would be treated as success because `explicitSuccessExitCodePattern` matches before the `\b[1-9]\d*\s+errors?\b` part of `commandFailurePattern`. If such mixed outputs are realistic, consider checking failure patterns (or a specific guard for this combination) before treating `exit code 0` as definitive success.
</issue_to_address>
### Comment 3
<location path="test/command-utils.test.ts" line_range="83-92" />
<code_context>
).toBe(true);
});
+ it("prefers failure when output contains both pass and fail markers", () => {
+ const toolCall = {
+ name: "exec_command",
+ arguments: JSON.stringify({ cmd: "pnpm test" }),
+ output: ["PASS src/a.test.ts", "FAIL src/b.test.ts", "Process exited with code 1"].join("\n")
+ };
+
+ expect(commandSucceeded(toolCall)).toBe(false);
+ expect(commandFailed(toolCall)).toBe(true);
+ });
+
+ it("prefers explicit success exit codes over incidental fail words in stdout", () => {
+ const toolCall = {
+ name: "exec_command",
+ arguments: JSON.stringify({ cmd: "pnpm docs:lint" }),
+ output: ["Process exited with code 0", "Output:", "Use PASS/FAIL reporting in docs."].join("\n")
+ };
+
+ expect(commandSucceeded(toolCall)).toBe(true);
+ expect(commandFailed(toolCall)).toBe(false);
+ });
+
</code_context>
<issue_to_address>
**suggestion (testing):** Consider adding a test for explicit success exit codes combined with generic error wording
Current tests cover precedence between PASS/FAIL markers and exit codes, including the ambiguous "PASS/FAIL" wording. One remaining gap is when stdout has an explicit success exit code (0) but also a generic error word like "error" or "failed" outside an exit-code line. Since the failure pattern still matches `error|failed|failure`, an extra case such as `"Process exited with code 0\nSome error text here"` would help ensure `explicitSuccessExitCodePattern` continues to override incidental error wording as intended.
</issue_to_address>
### Comment 4
<location path="test/sync-service.test.ts" line_range="578-587" />
<code_context>
+ it("clears a matching recovery marker when an already-processed rollout is retried", async () => {
</code_context>
<issue_to_address>
**suggestion (testing):** Add a complementary test for non-matching recovery markers to ensure they are not treated as recovery
You already test the positive recovery-marker cases well. To fully cover the contract, we also need the negative case: when a recovery record exists but does *not* match the current rollout/session (e.g., different `sessionId` or `rolloutPath`), `syncRollout` should leave that record intact and `isRecovery` should remain false. Please add a test that writes a non-matching recovery record, runs `syncRollout`, and asserts that (1) the recovery record is still present and (2) the resulting audit entry has `isRecovery !== true` to lock in this behavior.
Suggested implementation:
```typescript
const result = await service.syncRollout(rolloutPath, true);
expect(result.skipped).toBe(false);
expect(await service.memoryStore.readSyncRecoveryRecord()).toBeNull();
});
it("does not treat non-matching recovery markers as recovery and leaves them intact", async () => {
const projectDir = await tempDir("cam-sync-non-matching-recovery-project-");
const memoryRoot = await tempDir("cam-sync-non-matching-recovery-memory-");
const rolloutPath = path.join(projectDir, "rollout.jsonl");
await fs.writeFile(rolloutPath, rolloutFixture(projectDir, "session-skip-recovery"), "utf8");
const service = new SyncService(
detectProjectContext(projectDir),
baseConfig(memoryRoot),
path.resolve("schemas/memory-operations.schema.json")
);
// write a recovery record that does NOT match the upcoming rollout/session
await service.memoryStore.writeSyncRecoveryRecord({
sessionId: "some-other-session-id",
rolloutPath: "/different/project/rollout.jsonl",
createdAt: new Date().toISOString(),
});
const result = await service.syncRollout(rolloutPath, true);
// rollout should proceed normally (not skipped, not treated as recovery)
expect(result.skipped).toBe(false);
// the non-matching recovery record should remain
const recoveryRecord = await service.memoryStore.readSyncRecoveryRecord();
expect(recoveryRecord).not.toBeNull();
expect(recoveryRecord?.sessionId).toBe("some-other-session-id");
// and the resulting audit entry should not be marked as recovery
const auditEntries = await service.memoryStore.readAuditEntries();
expect(auditEntries[0]?.isRecovery).not.toBe(true);
});
it("clears a matching recovery marker when an already-processed rollout is retried", async () => {
const projectDir = await tempDir("cam-sync-skip-recovery-project-");
const memoryRoot = await tempDir("cam-sync-skip-recovery-memory-");
const rolloutPath = path.join(projectDir, "rollout.jsonl");
await fs.writeFile(rolloutPath, rolloutFixture(projectDir, "session-skip-recovery"), "utf8");
const service = new SyncService(
detectProjectContext(projectDir),
baseConfig(memoryRoot),
path.resolve("schemas/memory-operations.schema.json")
);
```
The new test assumes the following helpers/APIs exist and match your existing tests:
1. `service.memoryStore.writeSyncRecoveryRecord({ sessionId, rolloutPath, createdAt })` – mirror the shape used in your "clears a matching recovery marker" test; adjust field names or arguments if your recovery record type differs.
2. `service.memoryStore.readAuditEntries()` returning an array of audit entries with an `isRecovery` boolean flag; if your API is named differently (e.g. `readAuditLog`, `listAuditEntries`, or nests `isRecovery` deeper), update the call and assertion accordingly.
3. If your audit log for `syncRollout` appends multiple entries, you may want to assert against the last entry instead of `[0]` (e.g. `auditEntries[auditEntries.length - 1]`).
4. If your existing tests use a different rollout fixture for this scenario, align `"session-skip-recovery"` with the appropriate fixture name so that the rollout is considered “already processed” in the same way as the matching-marker test.
</issue_to_address>
### Comment 5
<location path="test/session-continuity.test.ts" line_range="922-945" />
<code_context>
expect(sanitized.project.confirmedWorking.join("\n")).not.toContain("12345678901234567890");
});
+ it("applySessionContinuityLayerSummary clears stale goals when the next layer leaves goal empty", () => {
+ const base = {
+ ...createEmptySessionContinuityState("project-local", "project-1", "worktree-1"),
+ goal: "Stale local goal",
+ incompleteNext: ["Carry over the old next step"]
+ };
+
+ const merged = applySessionContinuityLayerSummary(
+ base,
+ {
+ goal: "",
+ confirmedWorking: [],
+ triedAndFailed: [],
+ notYetTried: [],
+ incompleteNext: ["Fresh next step"],
+ filesDecisionsEnvironment: []
+ },
+ "session-clear-goal"
+ );
+
+ expect(merged.goal).toBe("");
+ expect(merged.incompleteNext).toContain("Fresh next step");
+ expect(merged.incompleteNext).toContain("Carry over the old next step");
+ });
+
</code_context>
<issue_to_address>
**suggestion (testing):** Add a test distinguishing between omitted `goal` and explicitly empty `goal` in layer summaries
Given the new `goalProvided` logic in `buildLayerSummary`, please add a complementary test where the next-layer summary omits `goal` entirely. That test should verify that the existing goal is preserved when `goal` is not present, contrasting with the current case where `goal: ""` clears it. This will help catch regressions if the summary-building logic is refactored later.
```suggestion
it("applySessionContinuityLayerSummary clears stale goals when the next layer leaves goal empty", () => {
const base = {
...createEmptySessionContinuityState("project-local", "project-1", "worktree-1"),
goal: "Stale local goal",
incompleteNext: ["Carry over the old next step"]
};
const merged = applySessionContinuityLayerSummary(
base,
{
goal: "",
confirmedWorking: [],
triedAndFailed: [],
notYetTried: [],
incompleteNext: ["Fresh next step"],
filesDecisionsEnvironment: []
},
"session-clear-goal"
);
expect(merged.goal).toBe("");
expect(merged.incompleteNext).toContain("Fresh next step");
expect(merged.incompleteNext).toContain("Carry over the old next step");
});
it("applySessionContinuityLayerSummary preserves existing goal when the next layer omits goal", () => {
const base = {
...createEmptySessionContinuityState("project-local", "project-1", "worktree-1"),
goal: "Stale local goal",
incompleteNext: ["Carry over the old next step"]
};
const merged = applySessionContinuityLayerSummary(
base,
{
// goal intentionally omitted to represent "not provided" in this layer
confirmedWorking: [],
triedAndFailed: [],
notYetTried: [],
incompleteNext: ["Fresh next step"],
filesDecisionsEnvironment: []
},
"session-preserve-goal-when-omitted"
);
expect(merged.goal).toBe("Stale local goal");
expect(merged.incompleteNext).toContain("Fresh next step");
expect(merged.incompleteNext).toContain("Carry over the old next step");
});
```
</issue_to_address>
### Comment 6
<location path="test/memory-store.test.ts" line_range="123-126" />
<code_context>
});
- it("does not report a memory file as startup-loaded when only header lines fit", async () => {
+ it("skips partial scope blocks when the startup budget cannot fit quoted lines", async () => {
const projectDir = await tempDir("cam-store-startup-header-only-");
const memoryRoot = await tempDir("cam-store-startup-header-only-mem-");
</code_context>
<issue_to_address>
**suggestion (testing):** Also assert that the startup preamble is still present under tiny budgets
To better cover `appendWithinBudget` in `compileStartupMemory`, add an assertion that the returned text still contains part of the static preamble (for example `# Codex Auto Memory`) even under very small budgets. That guards against regressions where we return an empty or malformed startup block.
Suggested implementation:
```typescript
it("skips partial scope blocks when the startup budget cannot fit quoted lines", async () => {
const projectDir = await tempDir("cam-store-startup-header-only-");
const memoryRoot = await tempDir("cam-store-startup-header-only-mem-");
const config: AppConfig = {
const startup = await compileStartupMemory(store, 8);
expect(startup.lineCount).toBeLessThanOrEqual(8);
// Even under very small budgets, the static startup preamble should still be present.
expect(startup.text).toContain("# Codex Auto Memory");
expect(startup.text).not.toContain("## Project Local");
expect(startup.text).not.toContain("| # Project Local Memory");
expect(startup.sourceFiles).toEqual([]);
```
If the actual static preamble string in `compileStartupMemory` differs (for example it includes trailing spaces, different capitalization, or additional prefix text), update `"# Codex Auto Memory"` in the new expectation to match the exact literal used in the startup preamble so the test remains robust.
</issue_to_address>
### Comment 7
<location path="docs/progress-log.md" line_range="237" />
<code_context>
+
+- Added explicit `isRecovery` provenance to durable sync audit entries so reviewer surfaces no longer need to infer recovery follow-through indirectly.
+- Chinese next-step extraction now skips very short captures plus several narrative-connector fragments before they can land in continuity evidence.
+- Official Codex and Claude public docs were re-checked again before refreshing migration and reviewer wording, keeping the repository companion-first and supportable.
+
+### Post-alpha.22 review hardening
</code_context>
<issue_to_address>
**nitpick (typo):** Consider removing the redundancy in "re-checked again".
"Re-checked" already implies "again". Consider using either "were checked again" or "were re-checked" for cleaner wording.
```suggestion
- Official Codex and Claude public docs were re-checked before refreshing migration and reviewer wording, keeping the repository companion-first and supportable.
```
</issue_to_address>
| if (scopeTopicRefs.length > 0) { | ||
| appendBlock([ | ||
| const [firstTopicRef, ...remainingTopicRefs] = scopeTopicRefs; | ||
| if (!firstTopicRef) { | ||
| const finalText = lines.join("\n").trimEnd(); | ||
| const finalLines = finalText ? finalText.split("\n") : []; | ||
| return { |
There was a problem hiding this comment.
suggestion: The !firstTopicRef branch after checking scopeTopicRefs.length > 0 appears unreachable.
Within the if (scopeTopicRefs.length > 0) block you destructure [firstTopicRef, ...remainingTopicRefs], so firstTopicRef is guaranteed to be defined. The subsequent if (!firstTopicRef) early return is therefore dead code. Consider removing that branch or the outer length check to simplify the control flow and avoid duplicate finalization logic.
| const explicitSuccessExitCodePattern = /(Process exited with code 0|\bexit(?:ed)? code 0\b)/iu; | ||
| const explicitFailureExitCodePattern = /(Process exited with code [1-9]\d*|\bexit(?:ed)? code [1-9]\d*\b)/iu; | ||
|
|
||
| export function extractCommand(toolCall: RolloutToolCall): string | null { | ||
| try { |
There was a problem hiding this comment.
issue (bug_risk): Ordering of explicit success vs. failure heuristics may misclassify mixed outputs.
Given the current order (explicit failure exit codes → explicit success exit codes → commandFailurePattern → commandSuccessPattern), an output like "exit code 0, but 1 error" would be treated as success because explicitSuccessExitCodePattern matches before the \b[1-9]\d*\s+errors?\b part of commandFailurePattern. If such mixed outputs are realistic, consider checking failure patterns (or a specific guard for this combination) before treating exit code 0 as definitive success.
| it("prefers failure when output contains both pass and fail markers", () => { | ||
| const toolCall = { | ||
| name: "exec_command", | ||
| arguments: JSON.stringify({ cmd: "pnpm test" }), | ||
| output: ["PASS src/a.test.ts", "FAIL src/b.test.ts", "Process exited with code 1"].join("\n") | ||
| }; | ||
|
|
||
| expect(commandSucceeded(toolCall)).toBe(false); | ||
| expect(commandFailed(toolCall)).toBe(true); | ||
| }); |
There was a problem hiding this comment.
suggestion (testing): Consider adding a test for explicit success exit codes combined with generic error wording
Current tests cover precedence between PASS/FAIL markers and exit codes, including the ambiguous "PASS/FAIL" wording. One remaining gap is when stdout has an explicit success exit code (0) but also a generic error word like "error" or "failed" outside an exit-code line. Since the failure pattern still matches error|failed|failure, an extra case such as "Process exited with code 0\nSome error text here" would help ensure explicitSuccessExitCodePattern continues to override incidental error wording as intended.
```ts
it("clears a matching recovery marker when an already-processed rollout is retried", async () => {
  const projectDir = await tempDir("cam-sync-skip-recovery-project-");
  const memoryRoot = await tempDir("cam-sync-skip-recovery-memory-");
  const rolloutPath = path.join(projectDir, "rollout.jsonl");
  await fs.writeFile(rolloutPath, rolloutFixture(projectDir, "session-skip-recovery"), "utf8");

  const service = new SyncService(
    detectProjectContext(projectDir),
    baseConfig(memoryRoot),
    path.resolve("schemas/memory-operations.schema.json")
```
suggestion (testing): Add a complementary test for non-matching recovery markers to ensure they are not treated as recovery
You already test the positive recovery-marker cases well. To fully cover the contract, we also need the negative case: when a recovery record exists but does not match the current rollout/session (e.g., different sessionId or rolloutPath), syncRollout should leave that record intact and isRecovery should remain false. Please add a test that writes a non-matching recovery record, runs syncRollout, and asserts that (1) the recovery record is still present and (2) the resulting audit entry has isRecovery !== true to lock in this behavior.
Suggested implementation:

```ts
  const result = await service.syncRollout(rolloutPath, true);
  expect(result.skipped).toBe(false);
  expect(await service.memoryStore.readSyncRecoveryRecord()).toBeNull();
});

it("does not treat non-matching recovery markers as recovery and leaves them intact", async () => {
  const projectDir = await tempDir("cam-sync-non-matching-recovery-project-");
  const memoryRoot = await tempDir("cam-sync-non-matching-recovery-memory-");
  const rolloutPath = path.join(projectDir, "rollout.jsonl");
  await fs.writeFile(rolloutPath, rolloutFixture(projectDir, "session-skip-recovery"), "utf8");

  const service = new SyncService(
    detectProjectContext(projectDir),
    baseConfig(memoryRoot),
    path.resolve("schemas/memory-operations.schema.json")
  );

  // write a recovery record that does NOT match the upcoming rollout/session
  await service.memoryStore.writeSyncRecoveryRecord({
    sessionId: "some-other-session-id",
    rolloutPath: "/different/project/rollout.jsonl",
    createdAt: new Date().toISOString(),
  });

  const result = await service.syncRollout(rolloutPath, true);

  // rollout should proceed normally (not skipped, not treated as recovery)
  expect(result.skipped).toBe(false);

  // the non-matching recovery record should remain
  const recoveryRecord = await service.memoryStore.readSyncRecoveryRecord();
  expect(recoveryRecord).not.toBeNull();
  expect(recoveryRecord?.sessionId).toBe("some-other-session-id");

  // and the resulting audit entry should not be marked as recovery
  const auditEntries = await service.memoryStore.readAuditEntries();
  expect(auditEntries[0]?.isRecovery).not.toBe(true);
});

it("clears a matching recovery marker when an already-processed rollout is retried", async () => {
  const projectDir = await tempDir("cam-sync-skip-recovery-project-");
  const memoryRoot = await tempDir("cam-sync-skip-recovery-memory-");
  const rolloutPath = path.join(projectDir, "rollout.jsonl");
  await fs.writeFile(rolloutPath, rolloutFixture(projectDir, "session-skip-recovery"), "utf8");

  const service = new SyncService(
    detectProjectContext(projectDir),
    baseConfig(memoryRoot),
    path.resolve("schemas/memory-operations.schema.json")
  );
```

The new test assumes the following helpers/APIs exist and match your existing tests:

- `service.memoryStore.writeSyncRecoveryRecord({ sessionId, rolloutPath, createdAt })` mirrors the shape used in your "clears a matching recovery marker" test; adjust field names or arguments if your recovery record type differs.
- `service.memoryStore.readAuditEntries()` returns an array of audit entries with an `isRecovery` boolean flag; if your API is named differently (e.g. `readAuditLog`, `listAuditEntries`, or nests `isRecovery` deeper), update the call and assertion accordingly.
- If your audit log for `syncRollout` appends multiple entries, you may want to assert against the last entry instead of `[0]` (e.g. `auditEntries[auditEntries.length - 1]`).
- If your existing tests use a different rollout fixture for this scenario, align `"session-skip-recovery"` with the appropriate fixture name so that the rollout is considered "already processed" in the same way as the matching-marker test.
```ts
it("applySessionContinuityLayerSummary clears stale goals when the next layer leaves goal empty", () => {
  const base = {
    ...createEmptySessionContinuityState("project-local", "project-1", "worktree-1"),
    goal: "Stale local goal",
    incompleteNext: ["Carry over the old next step"]
  };

  const merged = applySessionContinuityLayerSummary(
    base,
    {
      goal: "",
      confirmedWorking: [],
      triedAndFailed: [],
      notYetTried: [],
      incompleteNext: ["Fresh next step"],
      filesDecisionsEnvironment: []
    },
    "session-clear-goal"
  );

  expect(merged.goal).toBe("");
  expect(merged.incompleteNext).toContain("Fresh next step");
  expect(merged.incompleteNext).toContain("Carry over the old next step");
});
```
suggestion (testing): Add a test distinguishing between omitted goal and explicitly empty goal in layer summaries
Given the new goalProvided logic in buildLayerSummary, please add a complementary test where the next-layer summary omits goal entirely. That test should verify that the existing goal is preserved when goal is not present, contrasting with the current case where goal: "" clears it. This will help catch regressions if the summary-building logic is refactored later.
Suggested change (keep the existing test and append the new omitted-goal case):

```ts
it("applySessionContinuityLayerSummary clears stale goals when the next layer leaves goal empty", () => {
  const base = {
    ...createEmptySessionContinuityState("project-local", "project-1", "worktree-1"),
    goal: "Stale local goal",
    incompleteNext: ["Carry over the old next step"]
  };

  const merged = applySessionContinuityLayerSummary(
    base,
    {
      goal: "",
      confirmedWorking: [],
      triedAndFailed: [],
      notYetTried: [],
      incompleteNext: ["Fresh next step"],
      filesDecisionsEnvironment: []
    },
    "session-clear-goal"
  );

  expect(merged.goal).toBe("");
  expect(merged.incompleteNext).toContain("Fresh next step");
  expect(merged.incompleteNext).toContain("Carry over the old next step");
});

it("applySessionContinuityLayerSummary preserves existing goal when the next layer omits goal", () => {
  const base = {
    ...createEmptySessionContinuityState("project-local", "project-1", "worktree-1"),
    goal: "Stale local goal",
    incompleteNext: ["Carry over the old next step"]
  };

  const merged = applySessionContinuityLayerSummary(
    base,
    {
      // goal intentionally omitted to represent "not provided" in this layer
      confirmedWorking: [],
      triedAndFailed: [],
      notYetTried: [],
      incompleteNext: ["Fresh next step"],
      filesDecisionsEnvironment: []
    },
    "session-preserve-goal-when-omitted"
  );

  expect(merged.goal).toBe("Stale local goal");
  expect(merged.incompleteNext).toContain("Fresh next step");
  expect(merged.incompleteNext).toContain("Carry over the old next step");
});
```
```ts
it("skips partial scope blocks when the startup budget cannot fit quoted lines", async () => {
  const projectDir = await tempDir("cam-store-startup-header-only-");
  const memoryRoot = await tempDir("cam-store-startup-header-only-mem-");
  const config: AppConfig = {
```
suggestion (testing): Also assert that the startup preamble is still present under tiny budgets
To better cover appendWithinBudget in compileStartupMemory, add an assertion that the returned text still contains part of the static preamble (for example # Codex Auto Memory) even under very small budgets. That guards against regressions where we return an empty or malformed startup block.
Suggested implementation:

```ts
it("skips partial scope blocks when the startup budget cannot fit quoted lines", async () => {
  const projectDir = await tempDir("cam-store-startup-header-only-");
  const memoryRoot = await tempDir("cam-store-startup-header-only-mem-");
  const config: AppConfig = {
```

```ts
  const startup = await compileStartupMemory(store, 8);
  expect(startup.lineCount).toBeLessThanOrEqual(8);
  // Even under very small budgets, the static startup preamble should still be present.
  expect(startup.text).toContain("# Codex Auto Memory");
  expect(startup.text).not.toContain("## Project Local");
  expect(startup.text).not.toContain("| # Project Local Memory");
  expect(startup.sourceFiles).toEqual([]);
```

If the actual static preamble string in `compileStartupMemory` differs (for example it includes trailing spaces, different capitalization, or additional prefix text), update `"# Codex Auto Memory"` in the new expectation to match the exact literal used in the startup preamble so the test remains robust.
- Added explicit `isRecovery` provenance to durable sync audit entries so reviewer surfaces no longer need to infer recovery follow-through indirectly.
- Chinese next-step extraction now skips very short captures plus several narrative-connector fragments before they can land in continuity evidence.
- Official Codex and Claude public docs were re-checked again before refreshing migration and reviewer wording, keeping the repository companion-first and supportable.
nitpick (typo): Consider removing the redundancy in "re-checked again".
"Re-checked" already implies "again". Consider using either "were checked again" or "were re-checked" for cleaner wording.
```diff
- Official Codex and Claude public docs were re-checked again before refreshing migration and reviewer wording, keeping the repository companion-first and supportable.
+ Official Codex and Claude public docs were re-checked before refreshing migration and reviewer wording, keeping the repository companion-first and supportable.
```
Actionable comments posted: 3
🧹 Nitpick comments (2)
src/lib/domain/session-continuity.ts (1)

87-107: Consider extracting a shared `appendWithinBudget` helper.

This function is duplicated verbatim in `startup-memory.ts` (lines 27-47). Consider extracting it to a shared utility module (e.g., `src/lib/util/text.ts` or a new `src/lib/util/budget.ts`) to eliminate duplication.

♻️ Suggested extraction

```ts
// src/lib/util/budget.ts
export function appendWithinBudget(
  lines: string[],
  blockLines: string[],
  maxLines: number,
  minimumLines = 1
): number {
  if (maxLines - lines.length < minimumLines) {
    return 0;
  }
  let appended = 0;
  for (const line of blockLines) {
    if (lines.length >= maxLines) {
      break;
    }
    lines.push(line);
    appended += 1;
  }
  return appended;
}
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/lib/domain/session-continuity.ts` around lines 87-107: the `appendWithinBudget` function is duplicated in session-continuity.ts and startup-memory.ts; extract it to a single shared utility (e.g., a new module exporting `appendWithinBudget`) and replace the local copies with imports. Create a new util module that exports `function appendWithinBudget(lines, blockLines, maxLines, minimumLines = 1)` and update both session-continuity.ts and startup-memory.ts to import and call that exported function instead of maintaining their own copies.

src/lib/domain/startup-memory.ts (1)

27-47: Duplicate of `appendWithinBudget` in session-continuity.ts.

As noted in the review of `session-continuity.ts`, this function is duplicated. Consider extracting to a shared utility.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/lib/domain/startup-memory.ts` around lines 27-47: duplicate function `appendWithinBudget` exists here and in session-continuity.ts; extract it to a shared utility module (e.g., create a new helper file exporting `appendWithinBudget`), move the implementation there, and replace the local implementations in startup-memory.ts and session-continuity.ts with an import of the shared function; ensure the function is exported (named export), both files import that symbol, and the duplicated code is removed.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/lib/domain/memory-store.ts`:
- Around line 530-534: The current try/catch around
readJsonFile(this.paths.stateFile) swallows all errors and treats them as "empty
state"; change it so only expected benign conditions (file-not-found or JSON
parse errors) map to normalizeSyncState(null) while other IO errors are
propagated. Update the block around readJsonFile/normalizeSyncState: await
readJsonFile<SyncState>(this.paths.stateFile) inside try, catch the thrown error
and inspect it (e.g., if (err.code === 'ENOENT') return
normalizeSyncState(null); if (err instanceof SyntaxError || error message
indicates JSON parse) return normalizeSyncState(null); else throw err). This
preserves existing behavior for missing/malformed state but surfaces real
filesystem errors instead of hiding them.
In `@src/lib/domain/sync-service.ts`:
- Around line 87-93: The recovery marker is being cleared via
clearSyncRecoveryRecordBestEffort({ rolloutPath, sessionId: evidence.sessionId
}) before store.appendSyncAuditEntry(...) so a failed append can leave the
marker incorrectly removed; change the ordering so you first await
this.store.appendSyncAuditEntry(...) and only after that succeeds call
clearSyncRecoveryRecordBestEffort with the same rolloutPath and sessionId,
ensuring any thrown error from append prevents removal of the recovery marker
(i.e., move the clearSyncRecoveryRecordBestEffort call to after the append and
do not swallow append errors).
---
Nitpick comments:
In `@src/lib/domain/session-continuity.ts`:
- Around line 87-107: The appendWithinBudget function is duplicated in
session-continuity.ts and startup-memory.ts; extract it to a single shared
utility (e.g., new module exporting appendWithinBudget) and replace the local
copies with imports. Create a new util module that exports function
appendWithinBudget(lines, blockLines, maxLines, minimumLines = 1) and update
both session-continuity.ts and startup-memory.ts to import and call that
exported function instead of maintaining their own copies.
In `@src/lib/domain/startup-memory.ts`:
- Around line 27-47: Duplicate function appendWithinBudget exists here and in
session-continuity.ts; extract it to a shared utility module (e.g., create a new
helper file exporting appendWithinBudget), move the implementation there, and
replace the local implementations in startup-memory.ts and session-continuity.ts
with an import of the shared function; ensure the function is exported (named
export) and both files import that symbol and remove the duplicated code.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 7a4924c5-9b11-4a67-bdf2-e9541eb6fd04
📒 Files selected for processing (27)
- .gitignore
- AGENTS.md
- CHANGELOG.md
- docs/claude-reference.md
- docs/native-migration.en.md
- docs/native-migration.md
- docs/next-phase-brief.md
- docs/progress-log.md
- docs/reviewer-handoff.md
- src/lib/domain/memory-store.ts
- src/lib/domain/memory-sync-audit.ts
- src/lib/domain/recovery-records.ts
- src/lib/domain/session-continuity.ts
- src/lib/domain/startup-memory.ts
- src/lib/domain/sync-service.ts
- src/lib/extractor/command-utils.ts
- src/lib/extractor/session-continuity-evidence.ts
- src/lib/extractor/session-continuity-summarizer.ts
- src/lib/types.ts
- test/audit.test.ts
- test/command-utils.test.ts
- test/memory-command.test.ts
- test/memory-store.test.ts
- test/project-context.test.ts
- test/session-command.test.ts
- test/session-continuity.test.ts
- test/sync-service.test.ts
```ts
try {
  return normalizeSyncState(await readJsonFile<SyncState>(this.paths.stateFile));
} catch {
  return normalizeSyncState(null);
}
```
Don't collapse every state-file read error into "empty state".
Because src/lib/util/fs.ts's readJsonFile() also throws on fs.readFile(), Lines 530-534 now suppress storage failures like EACCES/EIO the same way as malformed JSON. That can make sync treat already-processed rollouts as new instead of surfacing the real failure.
Suggested narrowing

```diff
 public async getSyncState(): Promise<Required<SyncState>> {
   try {
     return normalizeSyncState(await readJsonFile<SyncState>(this.paths.stateFile));
-  } catch {
-    return normalizeSyncState(null);
+  } catch (error) {
+    const errno = error as NodeJS.ErrnoException;
+    if (error instanceof SyntaxError || errno.code === "ENOENT") {
+      return normalizeSyncState(null);
+    }
+    throw error;
   }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/domain/memory-store.ts` around lines 530 - 534, The current try/catch
around readJsonFile(this.paths.stateFile) swallows all errors and treats them as
"empty state"; change it so only expected benign conditions (file-not-found or
JSON parse errors) map to normalizeSyncState(null) while other IO errors are
propagated. Update the block around readJsonFile/normalizeSyncState: await
readJsonFile<SyncState>(this.paths.stateFile) inside try, catch the thrown error
and inspect it (e.g., if (err.code === 'ENOENT') return
normalizeSyncState(null); if (err instanceof SyntaxError || error message
indicates JSON parse) return normalizeSyncState(null); else throw err). This
preserves existing behavior for missing/malformed state but surfaces real
filesystem errors instead of hiding them.
```ts
if (isRecovery) {
  await this.clearSyncRecoveryRecordBestEffort({
    rolloutPath,
    sessionId: evidence.sessionId
  });
}
await this.store.appendSyncAuditEntry(
```
Defer recovery-marker cleanup until after skipped-audit append succeeds.
At Line 87, the matching recovery marker is cleared before the skipped audit write. If appendSyncAuditEntry fails at Line 93, the marker is lost even though the retry path did not complete cleanly.
💡 Proposed ordering fix

```diff
 if (!force && (await this.store.hasProcessedRollout(processedIdentity))) {
-  if (isRecovery) {
-    await this.clearSyncRecoveryRecordBestEffort({
-      rolloutPath,
-      sessionId: evidence.sessionId
-    });
-  }
   await this.store.appendSyncAuditEntry(
     buildMemorySyncAuditEntry({
       project: this.project,
       config: this.config,
@@
       ...(isRecovery ? { isRecovery: true } : {})
     })
   );
+  if (isRecovery) {
+    await this.clearSyncRecoveryRecordBestEffort({
+      rolloutPath,
+      sessionId: evidence.sessionId
+    });
+  }
   return {
     applied: [],
     skipped: true,
     message: `Skipped ${rolloutPath}; it was already processed.`
   };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/domain/sync-service.ts` around lines 87 - 93, The recovery marker is
being cleared via clearSyncRecoveryRecordBestEffort({ rolloutPath, sessionId:
evidence.sessionId }) before store.appendSyncAuditEntry(...) so a failed append
can leave the marker incorrectly removed; change the ordering so you first await
this.store.appendSyncAuditEntry(...) and only after that succeeds call
clearSyncRecoveryRecordBestEffort with the same rolloutPath and sessionId,
ensuring any thrown error from append prevents removal of the recovery marker
(i.e., move the clearSyncRecoveryRecordBestEffort call to after the append and
do not swallow append errors).
```ts
const captured = normalizeMessage(match[1]);
if (captured.length < 10) continue;
if (/^(?:而是|但是|因为|所以|不过|然后|其实|就是|也就是说)/u.test(captured)) continue;
matches.push(captured);
```
The new 10-character cutoff is too blunt.
Line 70 now drops every short capture, not just the Chinese connector fragments from Line 71. That strips valid next steps like `run tests`, `fix lint`, or `重跑测试`, so continuity can lose the most actionable handoff item.
Scope the length guard to the CJK false-positive case

```diff
 const captured = normalizeMessage(match[1]);
-if (captured.length < 10) continue;
-if (/^(?:而是|但是|因为|所以|不过|然后|其实|就是|也就是说)/u.test(captured)) continue;
+if (/^(?:而是|但是|因为|所以|不过|然后|其实|就是|也就是说)/u.test(captured)) {
+  continue;
+}
+if (/[\u4E00-\u9FFF]/u.test(captured) && captured.length < 6) {
+  continue;
+}
 matches.push(captured);
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
const captured = normalizeMessage(match[1]);
if (/^(?:而是|但是|因为|所以|不过|然后|其实|就是|也就是说)/u.test(captured)) {
  continue;
}
if (/[\u4E00-\u9FFF]/u.test(captured) && captured.length < 6) {
  continue;
}
matches.push(captured);
```
fix: harden sync recovery and reviewer contracts
Summary
Validation
- pnpm lint
- pnpm test
- pnpm build
- pnpm exec tsx src/cli.ts doctor --json
- pnpm exec tsx src/cli.ts memory --json --recent 5
- pnpm exec tsx src/cli.ts session save --json
- pnpm exec tsx src/cli.ts session load --json
- pnpm exec tsx src/cli.ts session status --json
- pnpm exec tsx src/cli.ts audit --json

Notes
- `cam doctor --json` still reports `memories` and `codex_hooks` as `under development` and disabled.
- `cam audit --json` completed with `high=0` and `medium=0`.

Summary by Sourcery
Harden durable sync, recovery, and session continuity behavior while tightening reviewer-facing contracts and docs.
Bug Fixes:
Enhancements:
Documentation:
Tests:
Summary by cubic
Hardens durable sync recovery and session continuity so reviewer surfaces stay reliable under edge cases. Prefer explicit exit codes, handle corrupted state safely, respect tiny line budgets, and mark audit entries with recovery provenance.
Bug Fixes
- Corrupted `state.json` degrades to an empty state instead of blocking sync.

New Features
- Audit entries now include `isRecovery` and show a `[recovery]` tag in text output.

Written for commit 7aa24a9. Summary will update on new commits.
Summary by CodeRabbit
Release Notes
New Features
Bug Fixes
Changed
Documentation