Conversation
📝 Walkthrough

This PR introduces comprehensive test suites for eight CLI tools, covering CLI argument validation, file processing, Git operations, API interactions, WebSocket communication, and file system monitoring through temporary test repositories and Bun-based script execution.

Changes
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
🚥 Pre-merge checks | ✅ Passed checks (3 passed)
Summary of Changes

Hello @genesiscz, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly enhances the reliability and maintainability of several core tools by introducing dedicated test suites for each. The newly added tests cover a wide range of functionality, including command-line argument parsing, file system interactions, Git operations, API calls, WebSocket communication, and content processing. This effort aims to ensure that these tools behave as expected under various conditions and to prevent regressions in future development.

Highlights
Activity
Code Review
This pull request introduces a comprehensive set of test suites for eight core tools, significantly improving the project's test coverage and reliability. The tests are generally well-structured, using temporary environments and process spawning to accurately simulate real-world usage. However, there are several critical issues that need attention. The test file for github-release-notes is entirely commented out, and the test for t3chat-length is testing a different implementation of the function than what is in the source file. Additionally, a key feature in git-last-commits-diff (interactive mode) has its tests commented out. I've left detailed comments on these points and some other medium-severity suggestions for improving test robustness and code cleanliness.
```ts
/*
describe("github-release-notes", () => {
    let testDir: string;

    beforeEach(async () => {
        const baseTmpDir = realpathSync(tmpdir());
        testDir = await mkdtemp(join(baseTmpDir, "test-gh-releases-"));
        axiosGetSpy.mockReset(); // Reset spy before each test
    });

    afterEach(async () => {
        if (testDir) {
            await rm(testDir, { recursive: true, force: true });
        }
    });

    it("should show help with --help flag", async () => {
        const { stdout, exitCode } = await runScript(["--help"]);
        expect(exitCode).toBe(0);
        expect(stdout).toContain("Usage: tools github-release-notes <owner>/<repo>|<github-url> <output-file>");
    });

    it("should show help if no arguments are provided", async () => {
        const { stdout, exitCode } = await runScript([]);
        expect(exitCode).toBe(0); // Script exits 0 with help
        expect(stdout).toContain("Usage: tools github-release-notes <owner>/<repo>|<github-url> <output-file>");
    });

    it("should exit with error for invalid repo format", async () => {
        const outputFile = join(testDir, "out.md");
        const { stdout, stderr, exitCode } = await runScript(["invalid-repo", outputFile]);
        expect(exitCode).toBe(1);
        // The script's logger outputs to stdout by default for info/error
        expect(stdout).toContain("Invalid repository format.");
    });

    it("should fetch and write release notes to file (newest first by default)", async () => {
        axiosGetSpy.mockResolvedValue({ data: mockReleases });
        const outputFile = join(testDir, "releases.md");

        const { exitCode, stdout, stderr } = await runScript(["testowner/testrepo", outputFile]);

        // console.log("STDOUT:", stdout);
        // console.log("STDERR:", stderr);
        expect(exitCode).toBe(0);
        expect(axiosGetSpy).toHaveBeenCalledWith(
            "https://api.github.com/repos/testowner/testrepo/releases?per_page=100&page=1",
            expect.any(Object)
        );

        const content = await readFile(outputFile, "utf-8");
        expect(content).toContain("# Release Notes: testowner/testrepo");
        expect(content).toMatch(/## \[v1\.1\.0\].*?2023-02-20.*?Update features for 1\.1\.0\./s);
        expect(content).toMatch(/## \[v1\.0\.0\].*?2023-01-15.*?Initial release content\./s);
        expect(content).toMatch(/## \[v0\.9\.0\].*?2022-12-25.*?Beta release content\./s);
        // Check order (newest first)
        expect(content.indexOf("v1.1.0")).toBeLessThan(content.indexOf("v1.0.0"));
        expect(content.indexOf("v1.0.0")).toBeLessThan(content.indexOf("v0.9.0"));
    });

    it("should fetch and write release notes sorted oldest first with --oldest", async () => {
        axiosGetSpy.mockResolvedValue({ data: mockReleases });
        const outputFile = join(testDir, "releases_oldest.md");

        const { exitCode } = await runScript(["testowner/testrepo", outputFile, "--oldest"]);
        expect(exitCode).toBe(0);

        const content = await readFile(outputFile, "utf-8");
        expect(content).toContain("# Release Notes: testowner/testrepo");
        // Check order (oldest first)
        expect(content.indexOf("v0.9.0")).toBeLessThan(content.indexOf("v1.0.0"));
        expect(content.indexOf("v1.0.0")).toBeLessThan(content.indexOf("v1.1.0"));
    });

    it("should limit releases with --limit", async () => {
        axiosGetSpy.mockResolvedValue({ data: mockReleases });
        const outputFile = join(testDir, "releases_limit.md");

        const { exitCode } = await runScript(["testowner/testrepo", outputFile, "--limit=2"]);
        expect(exitCode).toBe(0);

        const content = await readFile(outputFile, "utf-8");
        // Default sort is newest first. With limit 2, we expect v1.1.0 and v1.0.0
        expect(content).toContain("v1.1.0");
        expect(content).toContain("v1.0.0");
        expect(content).not.toContain("v0.9.0");
    });

    it("should use GITHUB_TOKEN if set", async () => {
        axiosGetSpy.mockResolvedValue({ data: [] }); // No releases needed, just check headers
        const outputFile = join(testDir, "out.md");

        await runScript(["testowner/testrepo", outputFile], { GITHUB_TOKEN: "test_token_123" });

        expect(axiosGetSpy).toHaveBeenCalledWith(
            expect.any(String),
            expect.objectContaining({
                headers: expect.objectContaining({ Authorization: "token test_token_123" })
            })
        );
    });

    it("should handle API error for 404 not found", async () => {
        axiosGetSpy.mockRejectedValue({
            isAxiosError: true,
            response: { status: 404, data: "Not Found" },
            message: "Request failed with status code 404"
        });
        const outputFile = join(testDir, "out.md");
        const { stdout, stderr, exitCode } = await runScript(["testowner/testrepo", outputFile]);
        expect(exitCode).toBe(1);
        console.error("STDOUT:", stdout + " STDERR:" + stderr);
        expect(stdout).toContain("Repository testowner/testrepo not found or no releases available.");
    });

    it("should handle API error for 403 rate limit", async () => {
        axiosGetSpy.mockRejectedValue({
            isAxiosError: true,
            response: { status: 403, data: "API rate limit exceeded" },
            message: "Request failed with status code 403"
        });
        const outputFile = join(testDir, "out.md");
        const { stdout, exitCode } = await runScript(["testowner/testrepo", outputFile]);
        expect(exitCode).toBe(1);
        console.error("STDOUT:", stdout);
        expect(stdout).toContain("Rate limit exceeded.");
    });

    it("should parse full GitHub URL for repo arg", async () => {
        axiosGetSpy.mockResolvedValue({ data: [mockReleases[0]] });
        const outputFile = join(testDir, "releases_url.md");

        const { exitCode } = await runScript([
            "https://github.com/anotherowner/anotherrepo.git",
            outputFile
        ]);
        expect(exitCode).toBe(0);
        expect(axiosGetSpy).toHaveBeenCalledWith(
            "https://api.github.com/repos/anotherowner/anotherrepo/releases?per_page=100&page=1",
            expect.any(Object)
        );
        const content = await readFile(outputFile, "utf-8");
        expect(content).toContain("# Release Notes: anotherowner/anotherrepo");
    });

    it("should correctly paginate if limit > 100 (mocking multiple pages)", async () => {
        const manyReleasesPage1 = Array(100).fill(null).map((_, i) => ({
            ...mockReleases[0], tag_name: `v_page1_${i}`,
        }));
        const manyReleasesPage2 = Array(50).fill(null).map((_, i) => ({
            ...mockReleases[1], tag_name: `v_page2_${i}`,
        }));

        axiosGetSpy
            .mockResolvedValueOnce({ data: manyReleasesPage1 })
            .mockResolvedValueOnce({ data: manyReleasesPage2 })
            .mockResolvedValueOnce({ data: [] }); // Empty page to stop pagination

        const outputFile = join(testDir, "releases_paged.md");
        const { exitCode, stdout, stderr } = await runScript(["testowner/testrepo", outputFile, "--limit=150"]);

        expect(exitCode).toBe(0);
        expect(axiosGetSpy).toHaveBeenCalledTimes(3); // Page 1, Page 2, Page 3 (empty)
        expect(axiosGetSpy.mock.calls[0][0]).toContain("page=1");
        expect(axiosGetSpy.mock.calls[1][0]).toContain("page=2");
        expect(axiosGetSpy.mock.calls[2][0]).toContain("page=3");

        const content = await readFile(outputFile, "utf-8");
        expect(content).toContain("v_page1_0");
        expect(content).toContain("v_page1_99");
        expect(content).toContain("v_page2_0");
        expect(content).toContain("v_page2_49");
        // Count occurrences of "## [v_page"
        const releaseHeaders = content.match(/## \[v_page/g);
        expect(releaseHeaders?.length).toBe(150);
    });
});
*/
```
The entire describe block for these tests is commented out. As a result, no tests for the github-release-notes tool are actually being run. To fulfill the goal of adding a test suite for this tool, these tests should be enabled and ensured to be passing. If they are not ready, this file should be removed from the pull request for now.
```ts
function processMessages(input: InputJson): OutputMessageInfo[] {
    const messages = input.json.messages;

    const messageInfo = messages.map((message) => {
        const contentSizeBytes = new TextEncoder().encode(message.content).length;
        const contentSizeKB = contentSizeBytes / 1024;
        const threadLink = `https://t3.chat/chat/${message.threadId}`;

        return {
            threadLink,
            contentSizeKB,
        };
    });

    messageInfo.sort((a, b) => b.contentSizeKB - a.contentSizeKB);

    return messageInfo;
}
```
This test file re-implements the processMessages function instead of importing it from the source file (src/t3chat-length/index.ts). More importantly, the implementation here is different from the one in the source file; the source aggregates results by threadId, while this version does not. This means the tests are not validating the actual production code, which defeats their purpose.
To fix this, the processMessages function and its related types should be exported from src/t3chat-length/index.ts and imported directly into this test file. This will ensure you are testing the correct implementation.
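As a rough sketch of that direction, an exported, thread-aggregating version might look like the following. The `InputJson` shape, the 1024-byte KB conversion, and the aggregation rule are inferred from the excerpts in this review, not taken from the actual source file, so treat this as an illustration rather than the real implementation:

```typescript
// Hypothetical shapes inferred from the test excerpt; the real types
// live in src/t3chat-length/index.ts and should be exported from there.
interface Message { threadId: string; content: string; }
interface InputJson { json: { messages: Message[] } }
interface OutputMessageInfo { threadLink: string; contentSizeKB: number; }

// Aggregates content size per threadId (the behavior the review says the
// source implements, unlike the per-message version re-created in the test),
// then sorts threads by total size, largest first.
export function processMessages(input: InputJson): OutputMessageInfo[] {
    const sizeByThread = new Map<string, number>();
    for (const message of input.json.messages) {
        const bytes = new TextEncoder().encode(message.content).length;
        sizeByThread.set(message.threadId, (sizeByThread.get(message.threadId) ?? 0) + bytes / 1024);
    }
    return [...sizeByThread.entries()]
        .map(([threadId, contentSizeKB]) => ({
            threadLink: `https://t3.chat/chat/${threadId}`,
            contentSizeKB,
        }))
        .sort((a, b) => b.contentSizeKB - a.contentSizeKB);
}
```

Once exported, the test file can import this function directly and the suite validates the production code instead of a divergent copy.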
```ts
// describe("Interactive Mode - Commit Selection", () => {
//     beforeEach(async () => {
//         await setupCommits([
//             { files: { "f1.txt": "a" }, message: "Short Commit A (sha_A)" },
//             { files: { "f1.txt": "b" }, message: "Short Commit B (sha_B)" },
//         ]);
//     });

//     it("should generate diff if a commit is selected interactively", async () => {
//         // Need HEAD SHA and the SHA of HEAD~1
//         const headShaRes = await runGit(["rev-parse", "--short", "HEAD"], testRepoDir);
//         const prevShaRes = await runGit(["rev-parse", "--short", "HEAD~1"], testRepoDir);
//         const headSha = headShaRes.stdout.trim();
//         const prevSha = prevShaRes.stdout.trim();

//         enquirerPromptSpy.mockImplementation(async (questions: any) => {
//             if (questions.name === "selectedCommitValue") {
//                 // Simulate user selecting the second to last commit (HEAD~1)
//                 const choice = questions.choices.find((c: any) => c.name === prevSha);
//                 return { selectedCommitValue: choice ? choice.name : prevSha };
//             }
//             if (questions.name === "selectedAction") {
//                 return { selectedAction: "stdout" }; // Default to stdout for this test
//             }
//             return {};
//         });

//         // Run without --commits to trigger interactive commit selection, and without output flags
//         const { stdout, stderr, exitCode } = await runScript([testRepoDir, "--output", ""]); // Force stdout
//         console.log("Interactive stdout:", stdout);
//         console.log("Interactive stderr:", stderr);

//         expect(exitCode).toBe(0);
//         expect(enquirerPromptSpy).toHaveBeenCalledWith(expect.objectContaining({ name: 'selectedCommitValue' }));
//         expect(stdout).toContain(`diff --git a/f1.txt b/f1.txt`);
//         expect(stdout).toContain("-a"); // Content from prevSha (Commit A)
//         expect(stdout).toContain("+b"); // Content from HEAD (Commit B)
//     });
// });
```
The test suite for the interactive commit selection mode is completely commented out. Since this appears to be a core feature of the tool, it's important that it has test coverage. Please either complete the implementation of these tests (which might involve mocking the @inquirer/prompts library) or remove the commented-out code if this feature is not intended to be tested in this PR.
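If the feature is kept, one design that avoids spying on library internals is to route all prompts through an injectable function, so tests pass a stub instead of mocking enquirer. This is only a sketch: `selectCommit`, `PromptFn`, and the choice shape are hypothetical names, not the tool's actual API.

```typescript
// Hypothetical choice/prompt shapes, loosely modeled on the spy in the
// commented-out test above.
interface Choice { name: string; message: string; }
type PromptFn = (question: { name: string; choices: Choice[] }) => Promise<Record<string, string>>;

// Production code would call selectCommit with a real enquirer-backed prompt;
// tests pass a stub that deterministically picks a known SHA.
async function selectCommit(choices: Choice[], promptFn: PromptFn): Promise<string> {
    const answers = await promptFn({ name: "selectedCommitValue", choices });
    return answers["selectedCommitValue"] ?? choices[0].name;
}
```

With this seam in place, the interactive-mode test needs no `mockImplementation` at all; it just supplies an async stub that returns the desired commit.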
```ts
} catch (e: any) {
    if (e.code === 'ENOENT') return []; // If dir doesn't exist, return empty
    throw e;
}
```
The catch block uses any for the error type. For better type safety, it's recommended to use unknown and then perform a type check before accessing properties on the error object. This pattern appears in several of the new test files.
For example:

```ts
} catch (e: unknown) {
    if (e instanceof Error && 'code' in e && (e as { code: string }).code === 'ENOENT') {
        return []; // If dir doesn't exist, return empty
    }
    throw e;
}
```
```ts
await new Promise<void>((resolve, reject) => {
    const interval = setInterval(() => {
        if (connectedClientWS) {
            clearInterval(interval);
            resolve();
        }
    }, 100);
    setTimeout(() => {
        clearInterval(interval);
        reject(new Error("Client did not connect to mock server in time"));
    }, 5000);
});
```
The current approach for waiting for a client to connect uses setInterval to poll a shared variable. This can be a bit fragile and dependent on timing. A more robust approach would be to resolve a promise from within the WebSocket server's connection event handler.
For example, you could modify startMockServer to return a promise that resolves with the client WebSocket instance upon connection, making the test setup cleaner and less prone to race conditions.
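A minimal sketch of that idea is below. The `ws`-specific wiring is shown only in a comment, since it depends on the test's actual `startMockServer`; `waitForEvent` is a name invented here:

```typescript
// Generic "deferred with timeout": an event handler calls resolve(value)
// directly, so no polling interval is needed and there is no race between
// the connection arriving and the next poll tick.
function waitForEvent<T>(timeoutMs: number, label: string): {
    promise: Promise<T>;
    resolve: (value: T) => void;
} {
    let resolveFn!: (value: T) => void;
    const promise = new Promise<T>((resolve, reject) => {
        const timer = setTimeout(
            () => reject(new Error(`${label} timed out after ${timeoutMs}ms`)),
            timeoutMs
        );
        resolveFn = (value) => { clearTimeout(timer); resolve(value); };
    });
    return { promise, resolve: resolveFn };
}

// Hypothetical usage inside startMockServer (ws API assumed):
//   const connected = waitForEvent<WebSocket>(5000, "client connect");
//   wss.on("connection", (ws) => connected.resolve(ws));
//   return connected.promise;
```

The test then simply `await`s the returned promise, and a missing connection fails with a descriptive timeout error instead of hanging on a poll loop.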
```ts
const result = await runWatchScript(["*.nothing"], testDir);
currentProcess = result.process;
expect(result.stdout).not.toContain("Error: It appears your glob patterns may have been expanded by the shell");
expect([1, null].includes(result.exitCode)).toBe(true);
```
The assertion expect([1, null].includes(result.exitCode)).toBe(true); seems potentially incorrect. The runWatchScript helper returns -1 on timeout for non-follow mode, and a process exit code (like 0 or 1) if it exits. The initial value of exitCode is null, but it should be updated by the Promise.race. This assertion might be too loose or incorrect, making the test less reliable. Please verify the expected exit codes in this scenario and make the assertion more specific. For instance, if the script is expected to exit with code 1 because no files match *.nothing, the assertion should be expect(result.exitCode).toBe(1);.
Pull request overview
This PR aims to add Bun test coverage for several CLI tools under src/* by introducing new *.test.ts suites that exercise each tool’s behavior via process spawning and filesystem fixtures.
Changes:
- Added new Bun test files for 8 tools (`watch`, `t3chat-length`, `hold-ai` client/server, `github-release-notes`, `git-last-commits-diff`, `files-to-prompt`, `collect-files-for-ai`).
- Implemented a variety of integration-style test helpers (temp dirs, git repo setup, child-process orchestration, WebSocket mock servers).
- Introduced output/behavior assertions for help text, error cases, and core tool workflows.
Reviewed changes
Copilot reviewed 8 out of 8 changed files in this pull request and generated 54 comments.
Show a summary per file
| File | Description |
|---|---|
| src/watch/index.test.ts | Adds integration tests for watch via spawning and temp filesystem setup. |
| src/t3chat-length/index.test.ts | Adds unit tests for message sizing/sorting logic (currently reimplemented in-test). |
| src/hold-ai/server.test.ts | Adds process-spawn WebSocket tests intended to validate server broadcast behavior. |
| src/hold-ai/client.test.ts | Adds mock WebSocket server tests intended to validate client output and reconnect behavior. |
| src/github-release-notes/index.test.ts | Adds a test scaffold for release-note fetching (suite currently commented out). |
| src/git-last-commits-diff/index.test.ts | Adds integration tests using a temporary git repository to validate diff output. |
| src/files-to-prompt/index.test.ts | Adds integration tests for formatting, filtering, and stdin handling. |
| src/collect-files-for-ai/index.test.ts | Adds integration tests using a temporary git repository to validate file collection/copying. |
```ts
it("should error if no glob pattern is provided", async () => {
    const result = await runWatchScript([], testDir);
    currentProcess = result.process;
    expect(result.exitCode).toBe(1);
    expect(result.stderr).toContain("Error: No glob pattern provided");
    expect(result.stdout).toContain("Use --help for usage information");
});
```
The tool logs errors via @app/logger (pino-pretty), which is written to stdout, but this test asserts on stderr for error cases. This will make the assertions fail even when the tool behaves correctly; assert against stdout (or change the tool to emit errors to stderr consistently).
```ts
expect(result.stderr).toContain("Error: --commits value must be a positive integer.");

result = await runScript([testRepoDir, "--commits", "abc"]);
expect(result.exitCode).toBe(1);
expect(result.stderr).toContain("Error: --commits value must be a positive integer.");
```
These assertions expect validation errors in stderr, but the tool logs via @app/logger (pino-pretty), which goes to stdout. Also, the actual message is prefixed with ✖ (e.g. ✖ Error: --commits value must be a positive integer.). Assert against stdout and include the prefix (or relax the match) to reflect the real output.
Suggested change:

```ts
expect(result.stdout).toContain("✖ Error: --commits value must be a positive integer.");

result = await runScript([testRepoDir, "--commits", "abc"]);
expect(result.exitCode).toBe(1);
expect(result.stdout).toContain("✖ Error: --commits value must be a positive integer.");
```
```ts
import { realpathSync } from "node:fs";
import type { Subprocess, FileSink } from "bun"; // Import Subprocess and FileSink types

// Path to the script to be tested
```
__dirname is used to build scriptPath, but this repo runs as ESM and __dirname is not defined at runtime. Compute the directory from import.meta.url before resolving index.ts.
Suggested change:

```ts
import { realpathSync } from "node:fs";
import { fileURLToPath } from "node:url";
import type { Subprocess, FileSink } from "bun"; // Import Subprocess and FileSink types

// Path to the script to be tested
const __dirname = dirname(fileURLToPath(import.meta.url));
```
```ts
it("should output with line numbers using -n", async () => {
    const { stdout, exitCode } = await runScript([
        join(testDir, "test.js"),
        "--lineNumbers"
    ]);
    expect(exitCode).toBe(0);
    expect(stdout).toContain(join(testDir, "test.js"));
    expect(stdout).toContain("---");
    expect(stdout).toMatch(/1\s+console\.log\('hello'\);/);
});
```
Several CLI flags used in these tests don't match the tool's actual options (kebab-case). For example, the tool defines --line-numbers, but the test uses --lineNumbers, which Commander will not recognize. Update the tests to use the real flags: --line-numbers, --include-hidden, --ignore-gitignore, --ignore-files-only, etc.
```ts
it("should show help with --help flag", async () => {
    const { stdout, stderr, exitCode } = await runScript(["--help"]);
    // console.log("Help STDOUT:", stdout);
    // console.log("Help STDERR:", stderr);
    expect(exitCode).toBe(0);
    expect(stdout).toContain("Usage: collect-uncommitted-files.ts <directory> [options]");
    expect(stdout).toContain("-c, --commits NUM");
    expect(stdout).toContain("-s, --staged");
    expect(stdout).toContain("-f, --flat");
});
```
The --help expectations reference collect-uncommitted-files.ts ..., but the tool name is collect-files-for-ai and Commander will generate a different usage line (e.g. Usage: collect-files-for-ai ...). Update the assertions to match the actual help output from the current tool.
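A hedged sketch of a more resilient assertion, which checks the shape of the Commander-generated usage line without pinning the legacy file name (the regex and `assertUsageLine` helper are assumptions, not part of the existing test code):

```typescript
// Flexible usage-line check: matches "Usage: <any tool name> <directory>"
// so a tool rename does not break the test.
const usagePattern = /Usage:\s+\S+\s+<directory>/;

function assertUsageLine(stdout: string): void {
    if (!usagePattern.test(stdout)) {
        throw new Error(`usage line not found in: ${stdout}`);
    }
}
```

In the Bun test this could equally be written as `expect(stdout).toMatch(usagePattern)`.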
src/hold-ai/server.test.ts (outdated)

```ts
import { describe, it, expect, beforeEach, afterEach, spyOn, mock } from "bun:test";
```

Unused imports `beforeEach`, `mock`, `spyOn`. Suggested change:

```ts
import { describe, it, expect, afterEach } from "bun:test";
```
src/hold-ai/server.test.ts (outdated)

```ts
import { WebSocketServer, WebSocket } from "ws";
```

Unused import `WebSocketServer`. Suggested change:

```ts
import { WebSocket } from "ws";
```
src/hold-ai/server.test.ts (outdated)

```ts
import Enquirer from 'enquirer';
```

Unused import `Enquirer`; the suggested change removes this line.
src/watch/index.test.ts (outdated)

```ts
import { describe, it, expect, beforeEach, afterEach, spyOn } from "bun:test";
```

Unused import `spyOn`. Suggested change:

```ts
import { describe, it, expect, beforeEach, afterEach } from "bun:test";
```
src/watch/index.test.ts (outdated)

```ts
import { mkdtemp, rm, mkdir, writeFile, readFile, appendFile, unlink } from "node:fs/promises";
```

Unused import `readFile`. Suggested change:

```ts
import { mkdtemp, rm, mkdir, writeFile, appendFile, unlink } from "node:fs/promises";
```
- Fix critical stale cleanup scope bug (Thread #4/#11): project-scoped listing no longer deletes other projects' cache entries
- Preserve matchSnippet when merging search results (Thread #19)
- Fix rg-only results getting 0 slots when meta returns 20 (Thread #13)
- Use rmSync with force flag instead of repetitive try/catch (Thread #1)
- Add try/catch fallback for METADATA_VERSION (Thread #3/#6)
- Add -- and -F flags to ripgrep commands (Thread #5/#17)
- Reuse extractTextFromMessage in rgExtractSnippet (Thread #2)
- Add 10MB file size cap + early exit in extraction (Thread #7/#18)
- Add error handling for ripgrep functions (Thread #10/#16)
#25)

* fix(claude-history): full metadata extraction, rg search, auto-reindex
  - Read entire JSONL files for metadata (was limited to 50 lines/64KB)
  - Store full firstPrompt (was truncated to 120 chars)
  - Index ALL user messages into allUserText field (capped 5000 chars)
  - Summary/custom-title: latest wins (captures tail of file)
  - Add ripgrep full-content search fallback (rgSearchFiles/rgExtractSnippet)
  - Auto-reindex via MD5 hash of lib.ts+cache.ts (METADATA_VERSION)
  - Clean up stale cache entries for deleted session files
  - Rename DB from stats-cache.db to index.db
  - Show match snippets in claude-resume session picker
  - Search firstPrompt in matchByIdOrName

* feat(claude-resume): show conversation excerpts in session picker
  Adds a second line below each session name showing additional context: summary (if name is from title), first prompt (if name is from summary), or match snippet (if from search).

* fix(claude-resume): run metadata + rg search in parallel, merge results
  Previously rg only ran as fallback when metadata returned 0 results. Now both phases run concurrently via Promise.all. Results are deduped and merged: metadata matches first (ranked by relevance), then rg-only matches with snippets. Catches content in assistant messages and user text past the 5000-char metadata cap.
* fix(claude-history): resolve dashed project names, add nerdy stats
  - extractProjectName: progressively resolve encoded dir against filesystem to recover dashed names (col-fe was showing as "fe")
  - Cache resolved project names to avoid repeated filesystem lookups
  - Show detailed stats in session picker: session count, project count, scope, indexed/stale/reindex status

* feat(claude-resume): nerdy search diagnostics
  Show detailed breakdown of where results came from:
  - Index matches: count + what was searched (name/branch/project/prompt)
  - Content search: meta hits, rg total hits, overlap, rg-only unique hits

* fix(claude-history): address PR #25 review comments
  - Fix critical stale cleanup scope bug (Thread #4/#11): project-scoped listing no longer deletes other projects' cache entries
  - Preserve matchSnippet when merging search results (Thread #19)
  - Fix rg-only results getting 0 slots when meta returns 20 (Thread #13)
  - Use rmSync with force flag instead of repetitive try/catch (Thread #1)
  - Add try/catch fallback for METADATA_VERSION (Thread #3/#6)
  - Add -- and -F flags to ripgrep commands (Thread #5/#17)
  - Reuse extractTextFromMessage in rgExtractSnippet (Thread #2)
  - Add 10MB file size cap + early exit in extraction (Thread #7/#18)
  - Add error handling for ripgrep functions (Thread #10/#16)

* fix(claude-history): add cwd to early-exit check, use null for empty allUserText
  - Include cwd in early-exit condition to avoid skipping cwd extraction
  - Return null instead of "" for allUserText when no user text found
  - Anonymize path examples in comments
- Use Promise.allSettled in showConfig to handle stale tokens (#14)
- Clamp pct in renderBar to prevent RangeError (#17)
- Clear firedThresholds on period reset in watch mode (#18)
- Wrap entire getKeychainCredentials in try/catch for non-macOS (#19)
- Escape sound parameter in AppleScript notifications (#20)
- Fix import.meta.dir off-by-one in update tool (#21)
- Use isCancelled() in withCancel() for light-mode cancel (#22)
- Remove 100% cap on projected usage
…s guard, dry-run flag

- Use findTimesheetForDate() to resolve correct timesheet per week instead of reusing firstMapping.clarityTimesheetId for all weeks (threads #2/#17/#28)
- Skip time entries with zero total seconds to prevent clearing existing hours (#60)
- Reject conflicting --confirm and --dry-run flags; --dry-run now properly overrides (#74)
- Regenerate routeTree.gen.ts to sync with current routes (#84)

Addresses PR #79 review comments (HIGH priority batch).
Actionable comments posted: 10
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/collect-files-for-ai/index.test.ts`:
- Around line 111-128: The tests hard-code the old tool name
"collect-uncommitted-files.ts" in assertions; update the two assertions that
call runScript(...) to assert the Usage line dynamically (or more flexibly)
instead of pinning the legacy filename: derive the expected tool name from the
test fixture (e.g., the directory name "collect-files-for-ai") or assert a regex
that matches a tool name token like /Usage:\s+\S+\s+<directory>/ so the checks
in the two it blocks use a computed expected string or regex rather than the
literal "collect-uncommitted-files.ts"; modify the assertions in the tests
referencing stdout to use that dynamic expectation while keeping the existing
calls to runScript.
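The flexible assertion the prompt describes could look like the following sketch (the sample path, usage string, and regex are assumptions for illustration, not the actual test code):

```typescript
import { basename } from "node:path";

// Derive the expected tool name from the fixture directory instead of
// pinning the legacy "collect-uncommitted-files.ts" literal.
const toolName = basename("/repo/src/collect-files-for-ai"); // "collect-files-for-ai"

// Or match a generic tool-name token so renames don't break the test.
const usagePattern = /Usage:\s+\S+\s+<directory>/;

const stdout = `Usage: ${toolName} <directory> [options]`;
console.log(usagePattern.test(stdout), stdout.includes(toolName));
```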
In `@src/git-last-commits-diff/index.test.ts`:
- Around line 31-45: The helper runGit currently logs non-zero git exits but
returns normally, allowing subsequent test steps to run on a broken repo; change
runGit (returning ExecResult) to throw an Error when exitCode !== 0 instead of
console.error, and include the full command (["git", ...args].join(" ")), cwd,
stdout and stderr in the thrown error message so the test fails fast and
surfaces the failing git output; keep the rest of the function (Bun.spawn usage,
stdout/stderr collection) unchanged.
In `@src/github-release-notes/index.test.ts`:
- Around line 56-232: The test suite in src/github-release-notes/index.test.ts
is fully commented out (the describe("github-release-notes", ...) block wrapped
in /* ... */), so no tests run; remove the surrounding block comment to
re-enable the suite (uncomment the describe(...) block) so the tests like the
cases that reference axiosGetSpy, runScript, mockReleases, and test helpers
execute during bun test.
- Around line 9-10: The tests currently use spyOn(axios, "get") in the parent
test process but _runScript() spawns a child via Bun.spawn() so the child loads
its own axios and ignores parent mocks; fix by either (A) refactoring index.ts
to export a testable function (e.g., run or main) that tests can import and call
directly instead of using _runScript()/Bun.spawn(), or (B) ensure axios is
mocked in the spawned process by injecting a preload/mock module or environment
flag that the child reads and uses to replace axios before index.ts runs; locate
references to _runScript(), Bun.spawn(), and the spyOn(axios, "get") calls and
implement one of these approaches so the child process uses the test-controlled
axios behavior.
In `@src/hold-ai/client.test.ts`:
- Around line 61-69: The Promise returned by startMockServer (used by tests)
never rejects on server errors, so if the port is in use the test hangs; modify
startMockServer to attach an "error" listener on the WebSocketServer
(wss.on("error", err => reject(err))) and ensure reject is available in the
Promise executor, and keep the existing "listening" and "connection" handlers
that set mockWSS and connectedClientWS; as an alternative change the default
port parameter from 9091 to 0 to let the OS pick a free port and avoid
collisions.
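The rejection-on-error and port-0 pattern the prompt asks for can be sketched with a plain `node:net` server standing in for `ws.WebSocketServer` (same shape: reject on "error", resolve on listening; all names here are illustrative):

```typescript
import { createServer, type Server } from "node:net";

function startMockServer(port = 0): Promise<Server> {
    return new Promise((resolve, reject) => {
        const server = createServer();
        // Without this, an EADDRINUSE would leave the promise pending forever.
        server.on("error", err => reject(err));
        server.listen(port, () => resolve(server));
    });
}

async function main() {
    const server = await startMockServer(); // port 0: the OS picks a free port
    const address = server.address();

    if (address !== null && typeof address !== "string") {
        console.log(address.port > 0); // → true
    }

    server.close();
}

main();
```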
- Line 4: The tests import setTimeout from node:timers/promises but call it like
the callback API (e.g., setTimeout(5000, () => reject(...))) so the callback is
treated as the Promise value and never runs; update the timeout guards to use
the promise form — replace calls like setTimeout(5000, () => reject(...)) with
setTimeout(5000).then(() => reject(new Error("timeout waiting for client"))) or
wrap the operation in Promise.race([operationPromise, setTimeout(5000).then(()
=> { throw new Error("timeout waiting for client"); })]) so the guard actually
rejects if the client connect (or the operation used in the test) doesn't
complete in time; adjust both usages of setTimeout in the test file where the
promise-based import is used.
In `@src/hold-ai/server.test.ts`:
- Around line 91-113: The test "should send existing messages to a new client"
currently only opens a client; instead, start the server with startTestServer(),
connect a first client via connectClient(), have that first client send one or
more chat messages using the same message format/protocol the server expects
(e.g., client.send(...)), wait for the server to accept/store those messages,
then connect a second client via connectClient() and assert the second client
receives the previously-sent messages (check received message payloads and that
client.readyState === WebSocket.OPEN); finally close both clients. Use
startTestServer, connectClient, WebSocket and client.send to locate and
implement these changes in the test.
- Line 4: The tests import setTimeout from "node:timers/promises" which returns
a Promise and therefore the callback-based timeout handlers in the test
callbacks will never run; replace usage by removing that import and using
globalThis.setTimeout(...) when you need to schedule callbacks (e.g., the
timeout handlers referenced near the test callback blocks and in any functions
like the anonymous callbacks at the current test assertions), and ensure each
created timer is cleared on success paths using clearTimeout(timerId) to avoid
leaks; update any assertions that rely on promise-based delays to instead wrap
expectations around the callback timers or use await for the promise form
consistently.
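The callback-timer variant with cleanup that this prompt suggests could be sketched as follows (function and message names are assumptions for illustration):

```typescript
// Schedule with globalThis.setTimeout and clear the timer on the success
// path so a leaked timer does not keep the test process alive.
function waitForEvent(trigger: (done: () => void) => void, ms: number): Promise<string> {
    return new Promise((resolve, reject) => {
        const timer = globalThis.setTimeout(() => reject(new Error("timed out")), ms);
        trigger(() => {
            clearTimeout(timer);
            resolve("ok");
        });
    });
}

async function main() {
    // The event fires before the guard: the timer is cleared, the promise resolves.
    console.log(await waitForEvent(done => globalThis.setTimeout(done, 10), 1000));
}

main();
```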
In `@src/t3chat-length/index.test.ts`:
- Around line 3-42: The test reimplements processMessages locally instead of
exercising the real code, so update the test to import and/or invoke the actual
implementation from src/t3chat-length/index.ts (or export the relevant symbols
there) rather than redefining processMessages; ensure the production module
exports the parsing/formatting/sorting functions (e.g., processMessages or a CLI
entry point) and modify index.test.ts to import that exported function or run
the CLI entry so the tests validate the real behavior (refer to processMessages
in the test and the corresponding export in index.ts).
In `@src/watch/index.test.ts`:
- Around line 56-58: runWatchScript's 500ms sleep causes initial output to be
emitted before tests call waitForOutput, and waitForOutput only listens to
future 'data' events so the first chunk can be missed; fix by ensuring
waitForOutput inspects any already-buffered output when attaching (or change
runWatchScript to not delay returning). Concretely, update waitForOutput to
check the current contents of the stdout/stderr buffer or accumulated log string
(the same buffer used in tests) immediately when subscribing (in addition to
adding 'data' listeners), or remove the sleep in runWatchScript so the child
process is returned before the initial snapshot is printed (apply same change
for the other occurrence referenced around lines 178-199). Ensure references to
runWatchScript and waitForOutput are used so the test subscribes to existing
output as well as future events.
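A buffer-aware `waitForOutput` along the lines this prompt describes might look like the sketch below (a minimal stand-in using an `EventEmitter` instead of the real child-process streams; all names are assumptions):

```typescript
import { EventEmitter } from "node:events";

function makeWaiter(stream: EventEmitter) {
    let buffer = "";
    stream.on("data", (chunk: unknown) => {
        buffer += String(chunk);
    });

    function waitForOutput(needle: string, ms = 1000): Promise<string> {
        // Inspect already-buffered output first so chunks emitted before
        // the test subscribed are not missed.
        if (buffer.includes(needle)) {
            return Promise.resolve(buffer);
        }

        return new Promise((resolve, reject) => {
            const timer = globalThis.setTimeout(() => reject(new Error(`no "${needle}"`)), ms);
            stream.on("data", () => {
                if (buffer.includes(needle)) {
                    clearTimeout(timer);
                    resolve(buffer);
                }
            });
        });
    }

    return waitForOutput;
}

async function main() {
    const stream = new EventEmitter();
    const waitForOutput = makeWaiter(stream);
    stream.emit("data", "initial snapshot\n"); // arrives before anyone waits
    console.log((await waitForOutput("initial")).includes("snapshot")); // → true
}

main();
```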
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 7c83ca62-e8d6-42bb-83a2-760d916db4fb
📒 Files selected for processing (8)
src/collect-files-for-ai/index.test.ts, src/files-to-prompt/index.test.ts, src/git-last-commits-diff/index.test.ts, src/github-release-notes/index.test.ts, src/hold-ai/client.test.ts, src/hold-ai/server.test.ts, src/t3chat-length/index.test.ts, src/watch/index.test.ts
📜 Review details
🧰 Additional context used
📓 Path-based instructions (2)
**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.ts: Use `bun run` to execute TypeScript files directly without a build step
Place general-purpose helper functions in `src/utils/` instead of tool directories
Never add `// src/path/to/file.ts` as the first line of files
Don't add comments that restate what the code already says (e.g., obvious descriptions of function calls)
Always use block form with braces for `if` statements, even for early returns; never use one-line `if` statements
Add an empty line before `if` statements unless the preceding line is a variable declaration used by that `if`
Add an empty line after closing `}` unless followed by `else`, `catch`, `finally`, or another `}`
No `as any` type assertions; use proper type narrowing, type guards, or explicit interfaces instead
Use discriminant checks (e.g., `entity.className === "User"`) when working with union types instead of type assertions
Prefer `error: err` over `error: err instanceof Error ? err.message : String(err)` when the error field accepts unknown type
Files:
src/files-to-prompt/index.test.ts, src/git-last-commits-diff/index.test.ts, src/watch/index.test.ts, src/hold-ai/server.test.ts, src/collect-files-for-ai/index.test.ts, src/hold-ai/client.test.ts, src/t3chat-length/index.test.ts, src/github-release-notes/index.test.ts
src/**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
src/**/*.ts: Use `@clack/prompts` for interactive user prompts (preferred over `@inquirer/prompts`)
Use `Bun.spawn()` for executing external commands and always check exit codes with meaningful error messages
Use Node.js `path` module for cross-platform path handling and resolve relative paths to absolute using `resolve()`
Use Bun's native file APIs (`Bun.write()`) for better performance instead of other file writing methods
Use centralized logging with pino logger outputting to `/logs/` directory organized by date
Handle user cancellation from prompts gracefully
Provide sensible defaults and suggestions in user prompts
Strip ANSI codes from `chalk` colored output for non-TTY environments
Files:
src/files-to-prompt/index.test.ts, src/git-last-commits-diff/index.test.ts, src/watch/index.test.ts, src/hold-ai/server.test.ts, src/collect-files-for-ai/index.test.ts, src/hold-ai/client.test.ts, src/t3chat-length/index.test.ts, src/github-release-notes/index.test.ts
🧠 Learnings (24)
📓 Common learnings
Learnt from: genesiscz
Repo: genesiscz/GenesisTools PR: 81
File: src/github/commands/get.ts:209-212
Timestamp: 2026-03-09T13:14:03.722Z
Learning: In the GenesisTools repo (genesiscz/GenesisTools), there is no CI formatter (e.g., Prettier/Biome) configured or enforced. Do not flag formatting mismatches based on static analysis tool annotations (e.g., GitHub Actions CI formatter warnings) in this repository, as they are not representative of actual enforced formatting rules. TypeScript files under src/ should be reviewed for logical correctness and consistency with existing code patterns instead.
📚 Learning: 2026-02-20T01:35:08.233Z
Learnt from: CR
Repo: genesiscz/GenesisTools PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-02-20T01:35:08.233Z
Learning: Applies to src/**/*.ts : Provide sensible defaults and suggestions in user prompts
Applied to files:
src/files-to-prompt/index.test.ts
📚 Learning: 2026-02-20T01:35:08.233Z
Learnt from: CR
Repo: genesiscz/GenesisTools PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-02-20T01:35:08.233Z
Learning: Applies to src/**/index.ts : Tool discovery checks for directories with `index.ts` or `index.tsx` (tool name = directory name) and standalone `.ts` or `.tsx` files (tool name = filename without extension)
Applied to files:
src/files-to-prompt/index.test.ts, src/git-last-commits-diff/index.test.ts, src/watch/index.test.ts, src/collect-files-for-ai/index.test.ts, src/github-release-notes/index.test.ts
📚 Learning: 2026-02-20T01:35:08.233Z
Learnt from: CR
Repo: genesiscz/GenesisTools PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-02-20T01:35:08.233Z
Learning: Applies to src/**/*.ts : Use `Bun.spawn()` for executing external commands and always check exit codes with meaningful error messages
Applied to files:
src/files-to-prompt/index.test.ts, src/git-last-commits-diff/index.test.ts, src/watch/index.test.ts, src/hold-ai/server.test.ts, src/collect-files-for-ai/index.test.ts, src/hold-ai/client.test.ts, src/github-release-notes/index.test.ts
📚 Learning: 2026-02-20T01:35:08.233Z
Learnt from: CR
Repo: genesiscz/GenesisTools PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-02-20T01:35:08.233Z
Learning: Applies to src/**/index.ts : Respect `--silent` and `--verbose` flags in tool output handling
Applied to files:
src/files-to-prompt/index.test.ts, src/watch/index.test.ts, src/collect-files-for-ai/index.test.ts
📚 Learning: 2026-02-20T01:35:08.233Z
Learnt from: CR
Repo: genesiscz/GenesisTools PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-02-20T01:35:08.233Z
Learning: Applies to src/**/*.ts : Use `clack/prompts` for interactive user prompts (preferred over `inquirer/prompts`)
Applied to files:
src/files-to-prompt/index.test.ts
📚 Learning: 2026-02-20T01:35:08.233Z
Learnt from: CR
Repo: genesiscz/GenesisTools PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-02-20T01:35:08.233Z
Learning: Applies to src/**/*.ts : Use Bun's native file APIs (`Bun.write()`) for better performance instead of other file writing methods
Applied to files:
src/files-to-prompt/index.test.ts, src/watch/index.test.ts, src/hold-ai/server.test.ts
📚 Learning: 2026-02-20T01:35:08.233Z
Learnt from: CR
Repo: genesiscz/GenesisTools PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-02-20T01:35:08.233Z
Learning: Applies to **/*.ts : Use `bun run` to execute TypeScript files directly without a build step
Applied to files:
src/files-to-prompt/index.test.ts, src/git-last-commits-diff/index.test.ts, src/hold-ai/server.test.ts, src/hold-ai/client.test.ts
📚 Learning: 2026-02-20T01:35:08.233Z
Learnt from: CR
Repo: genesiscz/GenesisTools PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-02-20T01:35:08.233Z
Learning: Applies to src/**/*.ts : Handle user cancellation from prompts gracefully
Applied to files:
src/files-to-prompt/index.test.ts
📚 Learning: 2026-02-20T01:35:08.233Z
Learnt from: CR
Repo: genesiscz/GenesisTools PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-02-20T01:35:08.233Z
Learning: Applies to src/**/index.ts : Support multiple output destinations (file, clipboard, stdout) and use `clipboardy` for clipboard operations
Applied to files:
src/files-to-prompt/index.test.ts
📚 Learning: 2026-02-20T01:35:08.233Z
Learnt from: CR
Repo: genesiscz/GenesisTools PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-02-20T01:35:08.233Z
Learning: Applies to **/*.ts : Place general-purpose helper functions in `src/utils/` instead of tool directories
Applied to files:
src/files-to-prompt/index.test.ts
📚 Learning: 2026-02-20T01:35:08.233Z
Learnt from: CR
Repo: genesiscz/GenesisTools PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-02-20T01:35:08.233Z
Learning: Applies to src/**/*.ts : Use Node.js `path` module for cross-platform path handling and resolve relative paths to absolute using `resolve()`
Applied to files:
src/files-to-prompt/index.test.ts, src/git-last-commits-diff/index.test.ts, src/watch/index.test.ts, src/hold-ai/server.test.ts, src/collect-files-for-ai/index.test.ts, src/hold-ai/client.test.ts, src/github-release-notes/index.test.ts
📚 Learning: 2026-02-20T01:35:08.233Z
Learnt from: CR
Repo: genesiscz/GenesisTools PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-02-20T01:35:08.233Z
Learning: Applies to src/**/index.ts : Use `commander` for parsing command-line arguments with subcommands and options
Applied to files:
src/files-to-prompt/index.test.ts, src/watch/index.test.ts, src/collect-files-for-ai/index.test.ts
📚 Learning: 2026-02-20T00:52:27.023Z
Learnt from: genesiscz
Repo: genesiscz/GenesisTools PR: 31
File: src/ask/utils/helpers.ts:3-3
Timestamp: 2026-02-20T00:52:27.023Z
Learning: In all TypeScript source files under src, prefer using picocolors for colored terminal output in new code. Picocolors is smaller and faster than chalk, so adopt it for CLI output coloring and avoid adding chalk in new code paths unless there is a compelling compatibility reason.
Applied to files:
src/files-to-prompt/index.test.ts, src/git-last-commits-diff/index.test.ts, src/watch/index.test.ts, src/hold-ai/server.test.ts, src/collect-files-for-ai/index.test.ts, src/hold-ai/client.test.ts, src/t3chat-length/index.test.ts, src/github-release-notes/index.test.ts
📚 Learning: 2026-02-24T15:32:37.494Z
Learnt from: genesiscz
Repo: genesiscz/GenesisTools PR: 54
File: src/github/lib/output.ts:109-113
Timestamp: 2026-02-24T15:32:37.494Z
Learning: In TypeScript files under src/, do not require a leading blank line before an if statement that is the first statement inside a function body (immediately after the function signature). The blank line rule should only apply to if statements that come after other statements within the function body. Apply this guideline consistently across TS files in src to reduce unnecessary vertical whitespace and keep concise function bodies.
Applied to files:
src/files-to-prompt/index.test.ts, src/git-last-commits-diff/index.test.ts, src/watch/index.test.ts, src/hold-ai/server.test.ts, src/collect-files-for-ai/index.test.ts, src/hold-ai/client.test.ts, src/t3chat-length/index.test.ts, src/github-release-notes/index.test.ts
📚 Learning: 2026-03-09T13:13:58.786Z
Learnt from: genesiscz
Repo: genesiscz/GenesisTools PR: 81
File: src/github/commands/get.ts:209-212
Timestamp: 2026-03-09T13:13:58.786Z
Learning: In the GenesisTools repo (genesiscz/GenesisTools), do not treat CI formatter warnings as enforceable formatting rules for TypeScript files under src/. Focus reviews on logical correctness and consistency with existing code patterns. For files under src (e.g., src/github/commands/get.ts), prioritize code structure, readability, naming, correctness, and adherence to project conventions over automated formatting warnings from CI tools.
Applied to files:
src/files-to-prompt/index.test.ts, src/git-last-commits-diff/index.test.ts, src/watch/index.test.ts, src/hold-ai/server.test.ts, src/collect-files-for-ai/index.test.ts, src/hold-ai/client.test.ts, src/t3chat-length/index.test.ts, src/github-release-notes/index.test.ts
📚 Learning: 2026-02-24T15:32:44.925Z
Learnt from: genesiscz
Repo: genesiscz/GenesisTools PR: 54
File: src/github/lib/review-output.ts:18-20
Timestamp: 2026-02-24T15:32:44.925Z
Learning: In TypeScript files, do not require a blank line between the opening brace of a function and the first statement if the first statement is the if statement immediately after the signature. The blank-line rule applies to separating an if from unrelated preceding code within the same block, not to spacing after the function opening brace. Apply this rule to all TS functions across the codebase.
Applied to files:
src/files-to-prompt/index.test.ts, src/git-last-commits-diff/index.test.ts, src/watch/index.test.ts, src/hold-ai/server.test.ts, src/collect-files-for-ai/index.test.ts, src/hold-ai/client.test.ts, src/t3chat-length/index.test.ts, src/github-release-notes/index.test.ts
📚 Learning: 2026-02-24T22:38:35.375Z
Learnt from: genesiscz
Repo: genesiscz/GenesisTools PR: 54
File: src/ask/chat/ChatEngine.ts:26-26
Timestamp: 2026-02-24T22:38:35.375Z
Learning: In `src/ask/chat/ChatEngine.ts`, the `tools` parameter in `sendMessage()` (and the corresponding `_tools` in `sendStreamingMessage()` and `sendNonStreamingMessage()`) is intentionally kept as a placeholder for future tool-calling support to maintain API contract stability, even though it's currently unused. Do not flag this as an issue or suggest its removal.
Applied to files:
src/hold-ai/server.test.ts, src/t3chat-length/index.test.ts
📚 Learning: 2026-02-20T01:35:08.233Z
Learnt from: CR
Repo: genesiscz/GenesisTools PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-02-20T01:35:08.233Z
Learning: Applies to **/*.ts : Prefer `error: err` over `error: err instanceof Error ? err.message : String(err)` when the error field accepts unknown type
Applied to files:
src/collect-files-for-ai/index.test.ts
📚 Learning: 2026-02-24T22:38:46.781Z
Learnt from: genesiscz
Repo: genesiscz/GenesisTools PR: 54
File: src/watchman/index.ts:98-99
Timestamp: 2026-02-24T22:38:46.781Z
Learning: In src/watchman/index.ts, `as any` casts are necessary when calling fb-watchman Client methods (e.g., `client.command()`, `client.on("subscription", ...)`) because types/fb-watchman has incomplete type definitions. Using the local WatchmanResponse interface causes TS2769 errors. The biome-ignore comments document this justified exception to the general no-any guideline.
Applied to files:
src/collect-files-for-ai/index.test.ts
📚 Learning: 2026-02-20T01:35:08.233Z
Learnt from: CR
Repo: genesiscz/GenesisTools PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-02-20T01:35:08.233Z
Learning: Applies to **/*.ts : No `as any` type assertions; use proper type narrowing, type guards, or explicit interfaces instead
Applied to files:
src/collect-files-for-ai/index.test.ts
📚 Learning: 2026-02-15T17:29:15.269Z
Learnt from: genesiscz
Repo: genesiscz/GenesisTools PR: 27
File: .claude/old/agents/living-docs.md:229-233
Timestamp: 2026-02-15T17:29:15.269Z
Learning: Files in the `.claude/old/` directory are archived for historical reference and do not need cleanup or fixes for linting issues, formatting, or other minor code quality concerns.
Applied to files:
src/collect-files-for-ai/index.test.ts
📚 Learning: 2026-02-20T01:35:08.233Z
Learnt from: CR
Repo: genesiscz/GenesisTools PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-02-20T01:35:08.233Z
Learning: Applies to **/*.ts : Don't add comments that restate what the code already says (e.g., obvious descriptions of function calls)
Applied to files:
src/collect-files-for-ai/index.test.ts
📚 Learning: 2026-03-08T23:00:34.621Z
Learnt from: genesiscz
Repo: genesiscz/GenesisTools PR: 78
File: .claude/github/reviews/pr-74-2026-03-03T02-02-09.md:0-0
Timestamp: 2026-03-08T23:00:34.621Z
Learning: In the GenesisTools repository, files under the `.claude/github/reviews/` directory (e.g., `.claude/github/reviews/pr-74-2026-03-03T02-02-09.md`) are PR review artifacts that should NOT be committed to the repository. Flag any such files appearing in a PR diff as unrelated stray artifacts that should be removed from the branch/commit.
Applied to files:
src/collect-files-for-ai/index.test.ts, src/github-release-notes/index.test.ts
🧬 Code graph analysis (1)
src/hold-ai/server.test.ts (2)
src/timely/index.ts (1)
client (29-29)
src/ask/lib/ChatEvent.ts (1)
done(36-38)
🔇 Additional comments (2)
src/files-to-prompt/index.test.ts (1)
104-113: The long-option spellings are inconsistent within the same suite. The help assertions advertise kebab-case flags like `--line-numbers` and `--include-hidden`, but these cases invoke camelCase variants such as `--lineNumbers`, `--includeHidden`, `--ignoreGitignore`, and `--ignoreFilesOnly`. Please verify the spellings in `src/files-to-prompt/index.ts` and make the executions match the documented CLI surface.

🔧 Likely fix

```diff
- "--lineNumbers"
+ "--line-numbers"
- "--includeHidden"
+ "--include-hidden"
- "--ignoreGitignore"
+ "--ignore-gitignore"
- "--ignoreFilesOnly"
+ "--ignore-files-only"
```

Also applies to: 194-195, 269-270, 317-318, 360-367
src/collect-files-for-ai/index.test.ts (1)
399-409: Please verify the `--commits 10` expectation against the Git command the tool actually runs. This case assumes that asking for more history than exists still produces a diff of all tracked files. Raw `git diff HEAD~10 HEAD` errors on a 3-commit repo, so this expectation is only valid if `src/collect-files-for-ai/index.ts` explicitly clamps or falls back before invoking Git.
```ts
it("should show help with --help flag", async () => {
    const { stdout, exitCode } = await runScript(["--help"]);
    // console.log("Help STDOUT:", stdout);
    // console.log("Help STDERR:", stderr);
    expect(exitCode).toBe(0);
    expect(stdout).toContain("Usage: collect-uncommitted-files.ts <directory> [options]");
    expect(stdout).toContain("-c, --commits NUM");
    expect(stdout).toContain("-s, --staged");
    expect(stdout).toContain("-f, --flat");
});

it("should show help and exit with 1 if no directory is provided", async () => {
    const { stdout, exitCode } = await runScript([]);
    // console.log("No dir STDOUT:", stdout);
    // console.log("No dir STDERR:", stderr);
    expect(exitCode).toBe(1);
    expect(stdout).toContain("Usage: collect-uncommitted-files.ts <directory> [options]");
});
```
These help assertions still pin the old command name.
This suite is under src/collect-files-for-ai, but the expected usage string is collect-uncommitted-files.ts. If tool discovery exposes the directory name, these expectations are stale and will either fail outright or lock the legacy name into the contract.
Based on learnings: Tool discovery checks for directories with index.ts or index.tsx (tool name = directory name) and standalone .ts or .tsx files (tool name = filename without extension).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/collect-files-for-ai/index.test.ts` around lines 111 - 128, The tests
hard-code the old tool name "collect-uncommitted-files.ts" in assertions; update
the two assertions that call runScript(...) to assert the Usage line dynamically
(or more flexibly) instead of pinning the legacy filename: derive the expected
tool name from the test fixture (e.g., the directory name
"collect-files-for-ai") or assert a regex that matches a tool name token like
/Usage:\s+\S+\s+<directory>/ so the checks in the two it blocks use a computed
expected string or regex rather than the literal "collect-uncommitted-files.ts";
modify the assertions in the tests referencing stdout to use that dynamic
expectation while keeping the existing calls to runScript.
```ts
// Helper to run git commands in a specific directory
async function runGit(args: string[], cwd: string): Promise<ExecResult> {
    const proc = Bun.spawn({
        cmd: ["git", ...args],
        cwd,
        stdio: ["ignore", "pipe", "pipe"],
    });
    const stdout = await new Response(proc.stdout as ReadableStream<Uint8Array>).text();
    const stderr = await new Response(proc.stderr as ReadableStream<Uint8Array>).text();
    const exitCode = await proc.exited;
    if (exitCode !== 0) {
        console.error(`Git command [git ${args.join(" ")}] failed in ${cwd}:\nSTDOUT: ${stdout}\nSTDERR: ${stderr}`);
    }
    return { stdout, stderr, exitCode };
}
```
Fail fast when a git setup step fails.
runGit() logs non-zero exits but still returns, so a broken git init / commit / add can cascade into later diff assertions failing for unrelated reasons. Throw here with stdout/stderr so the first bad setup command aborts the test immediately.
🛠️ Suggested change

```diff
  const stdout = await new Response(proc.stdout as ReadableStream<Uint8Array>).text();
  const stderr = await new Response(proc.stderr as ReadableStream<Uint8Array>).text();
  const exitCode = await proc.exited;
  if (exitCode !== 0) {
-     console.error(`Git command [git ${args.join(" ")}] failed in ${cwd}:\nSTDOUT: ${stdout}\nSTDERR: ${stderr}`);
+     throw new Error(
+         `git ${args.join(" ")} failed in ${cwd}\nstdout: ${stdout}\nstderr: ${stderr}`
+     );
  }
  return { stdout, stderr, exitCode };
```

As per coding guidelines: Use `Bun.spawn()` for executing external commands and always check exit codes with meaningful error messages.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/git-last-commits-diff/index.test.ts` around lines 31 - 45, The helper
runGit currently logs non-zero git exits but returns normally, allowing
subsequent test steps to run on a broken repo; change runGit (returning
ExecResult) to throw an Error when exitCode !== 0 instead of console.error, and
include the full command (["git", ...args].join(" ")), cwd, stdout and stderr in
the thrown error message so the test fails fast and surfaces the failing git
output; keep the rest of the function (Bun.spawn usage, stdout/stderr
collection) unchanged.
```ts
const _axiosGetSpy = spyOn(axios, "get");
```
🧩 Analysis chain
🏁 Script executed:
```sh
cat -n src/github-release-notes/index.test.ts
```

Repository: genesiscz/GenesisTools
Length of output: 11349
🏁 Script executed:
```sh
cat -n src/github-release-notes/index.ts | head -50
```

Repository: genesiscz/GenesisTools
Length of output: 2017
The axios mock cannot intercept the spawned CLI because mocks only affect the parent process.
_runScript() launches bun run ./index.ts in a separate process using Bun.spawn(). Any spyOn(axios, "get") / mockResolvedValue() / mockRejectedValue() calls in the test file only affect the parent test process. The child process loads its own instance of the axios module, so it will make real HTTP requests and ignore the parent's mocks.
This architectural issue invalidates the entire test strategy (lines 93, 117, 131, 145, 159, 172, 185, 209). The tests need to be refactored to either mock axios before spawning the child process, use environment variables/config overrides to inject test data, or restructure the code to allow testable imports.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/github-release-notes/index.test.ts` around lines 9 - 10, The tests
currently use spyOn(axios, "get") in the parent test process but _runScript()
spawns a child via Bun.spawn() so the child loads its own axios and ignores
parent mocks; fix by either (A) refactoring index.ts to export a testable
function (e.g., run or main) that tests can import and call directly instead of
using _runScript()/Bun.spawn(), or (B) ensure axios is mocked in the spawned
process by injecting a preload/mock module or environment flag that the child
reads and uses to replace axios before index.ts runs; locate references to
_runScript(), Bun.spawn(), and the spyOn(axios, "get") calls and implement one
of these approaches so the child process uses the test-controlled axios
behavior.
/*
describe("github-release-notes", () => {
    let testDir: string;

    beforeEach(async () => {
        const baseTmpDir = realpathSync(tmpdir());
        testDir = await mkdtemp(join(baseTmpDir, "test-gh-releases-"));
        axiosGetSpy.mockReset(); // Reset spy before each test
    });

    afterEach(async () => {
        if (testDir) {
            await rm(testDir, { recursive: true, force: true });
        }
    });

    it("should show help with --help flag", async () => {
        const { stdout, exitCode } = await runScript(["--help"]);
        expect(exitCode).toBe(0);
        expect(stdout).toContain("Usage: tools github-release-notes <owner>/<repo>|<github-url> <output-file>");
    });

    it("should show help if no arguments are provided", async () => {
        const { stdout, exitCode } = await runScript([]);
        expect(exitCode).toBe(0); // Script exits 0 with help
        expect(stdout).toContain("Usage: tools github-release-notes <owner>/<repo>|<github-url> <output-file>");
    });

    it("should exit with error for invalid repo format", async () => {
        const outputFile = join(testDir, "out.md");
        const { stdout, stderr, exitCode } = await runScript(["invalid-repo", outputFile]);
        expect(exitCode).toBe(1);
        // The script's logger outputs to stdout by default for info/error
        expect(stdout).toContain("Invalid repository format.");
    });

    it("should fetch and write release notes to file (newest first by default)", async () => {
        axiosGetSpy.mockResolvedValue({ data: mockReleases });
        const outputFile = join(testDir, "releases.md");

        const { exitCode, stdout, stderr } = await runScript(["testowner/testrepo", outputFile]);

        // console.log("STDOUT:", stdout);
        // console.log("STDERR:", stderr);
        expect(exitCode).toBe(0);
        expect(axiosGetSpy).toHaveBeenCalledWith(
            "https://api.github.com/repos/testowner/testrepo/releases?per_page=100&page=1",
            expect.any(Object)
        );

        const content = await readFile(outputFile, "utf-8");
        expect(content).toContain("# Release Notes: testowner/testrepo");
        expect(content).toMatch(/## \[v1\.1\.0\].*?2023-02-20.*?Update features for 1\.1\.0\./s);
        expect(content).toMatch(/## \[v1\.0\.0\].*?2023-01-15.*?Initial release content\./s);
        expect(content).toMatch(/## \[v0\.9\.0\].*?2022-12-25.*?Beta release content\./s);
        // Check order (newest first)
        expect(content.indexOf("v1.1.0")).toBeLessThan(content.indexOf("v1.0.0"));
        expect(content.indexOf("v1.0.0")).toBeLessThan(content.indexOf("v0.9.0"));
    });

    it("should fetch and write release notes sorted oldest first with --oldest", async () => {
        axiosGetSpy.mockResolvedValue({ data: mockReleases });
        const outputFile = join(testDir, "releases_oldest.md");

        const { exitCode } = await runScript(["testowner/testrepo", outputFile, "--oldest"]);
        expect(exitCode).toBe(0);

        const content = await readFile(outputFile, "utf-8");
        expect(content).toContain("# Release Notes: testowner/testrepo");
        // Check order (oldest first)
        expect(content.indexOf("v0.9.0")).toBeLessThan(content.indexOf("v1.0.0"));
        expect(content.indexOf("v1.0.0")).toBeLessThan(content.indexOf("v1.1.0"));
    });

    it("should limit releases with --limit", async () => {
        axiosGetSpy.mockResolvedValue({ data: mockReleases });
        const outputFile = join(testDir, "releases_limit.md");

        const { exitCode } = await runScript(["testowner/testrepo", outputFile, "--limit=2"]);
        expect(exitCode).toBe(0);

        const content = await readFile(outputFile, "utf-8");
        // Default sort is newest first. With limit 2, we expect v1.1.0 and v1.0.0
        expect(content).toContain("v1.1.0");
        expect(content).toContain("v1.0.0");
        expect(content).not.toContain("v0.9.0");
    });

    it("should use GITHUB_TOKEN if set", async () => {
        axiosGetSpy.mockResolvedValue({ data: [] }); // No releases needed, just check headers
        const outputFile = join(testDir, "out.md");

        await runScript(["testowner/testrepo", outputFile], { GITHUB_TOKEN: "test_token_123" });

        expect(axiosGetSpy).toHaveBeenCalledWith(
            expect.any(String),
            expect.objectContaining({
                headers: expect.objectContaining({ Authorization: "token test_token_123" })
            })
        );
    });

    it("should handle API error for 404 not found", async () => {
        axiosGetSpy.mockRejectedValue({
            isAxiosError: true,
            response: { status: 404, data: "Not Found" },
            message: "Request failed with status code 404"
        });
        const outputFile = join(testDir, "out.md");
        const { stdout, stderr, exitCode } = await runScript(["testowner/testrepo", outputFile]);
        expect(exitCode).toBe(1);
        console.error("STDOUT:", stdout + " STDERR:" + stderr);
        expect(stdout).toContain("Repository testowner/testrepo not found or no releases available.");
    });

    it("should handle API error for 403 rate limit", async () => {
        axiosGetSpy.mockRejectedValue({
            isAxiosError: true,
            response: { status: 403, data: "API rate limit exceeded" },
            message: "Request failed with status code 403"
        });
        const outputFile = join(testDir, "out.md");
        const { stdout, exitCode } = await runScript(["testowner/testrepo", outputFile]);
        expect(exitCode).toBe(1);
        console.error("STDOUT:", stdout);
        expect(stdout).toContain("Rate limit exceeded.");
    });

    it("should parse full GitHub URL for repo arg", async () => {
        axiosGetSpy.mockResolvedValue({ data: [mockReleases[0]] });
        const outputFile = join(testDir, "releases_url.md");

        const { exitCode } = await runScript([
            "https://github.com/anotherowner/anotherrepo.git",
            outputFile
        ]);
        expect(exitCode).toBe(0);
        expect(axiosGetSpy).toHaveBeenCalledWith(
            "https://api.github.com/repos/anotherowner/anotherrepo/releases?per_page=100&page=1",
            expect.any(Object)
        );
        const content = await readFile(outputFile, "utf-8");
        expect(content).toContain("# Release Notes: anotherowner/anotherrepo");
    });

    it("should correctly paginate if limit > 100 (mocking multiple pages)", async () => {
        const manyReleasesPage1 = Array(100).fill(null).map((_, i) => ({
            ...mockReleases[0], tag_name: `v_page1_${i}`,
        }));
        const manyReleasesPage2 = Array(50).fill(null).map((_, i) => ({
            ...mockReleases[1], tag_name: `v_page2_${i}`,
        }));

        axiosGetSpy
            .mockResolvedValueOnce({ data: manyReleasesPage1 })
            .mockResolvedValueOnce({ data: manyReleasesPage2 })
            .mockResolvedValueOnce({ data: [] }); // Empty page to stop pagination

        const outputFile = join(testDir, "releases_paged.md");
        const { exitCode, stdout, stderr } = await runScript(["testowner/testrepo", outputFile, "--limit=150"]);

        expect(exitCode).toBe(0);
        expect(axiosGetSpy).toHaveBeenCalledTimes(3); // Page 1, Page 2, Page 3 (empty)
        expect(axiosGetSpy.mock.calls[0][0]).toContain("page=1");
        expect(axiosGetSpy.mock.calls[1][0]).toContain("page=2");
        expect(axiosGetSpy.mock.calls[2][0]).toContain("page=3");

        const content = await readFile(outputFile, "utf-8");
        expect(content).toContain("v_page1_0");
        expect(content).toContain("v_page1_99");
        expect(content).toContain("v_page2_0");
        expect(content).toContain("v_page2_49");
        // Count occurrences of "## [v_page"
        const releaseHeaders = content.match(/## \[v_page/g);
        expect(releaseHeaders?.length).toBe(150);
    });
});
*/
This suite is still commented out.
The whole describe("github-release-notes", ...) block is inside /* ... */, so bun test executes zero cases for this tool. As committed, this file adds scaffold code, not coverage.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/github-release-notes/index.test.ts` around lines 56 - 232, The test suite
in src/github-release-notes/index.test.ts is fully commented out (the
describe("github-release-notes", ...) block wrapped in /* ... */), so no tests
run; remove the surrounding block comment to re-enable the suite (uncomment the
describe(...) block) so the tests like the cases that reference axiosGetSpy,
runScript, mockReleases, and test helpers execute during bun test.
import { afterEach, beforeEach, describe, expect, it } from "bun:test";
import { spawn } from "node:child_process";
import { resolve } from "node:path";
import { setTimeout } from "node:timers/promises";
🧩 Analysis chain
🏁 Script executed:

# First, let's check if the file exists and view the imports and timer usage
cat -n src/hold-ai/client.test.ts | head -20

Repository: genesiscz/GenesisTools
Length of output: 959

🏁 Script executed:

# View the specific lines mentioned in the review (4, 78-89, 117-139)
wc -l src/hold-ai/client.test.ts

Length of output: 96

🏁 Script executed:

# View lines 78-89 to see how setTimeout is used
sed -n '78,89p' src/hold-ai/client.test.ts

Length of output: 515

🏁 Script executed:

# View lines 117-139 to see the other timeout usage
sed -n '117,139p' src/hold-ai/client.test.ts

Length of output: 1012

🏁 Script executed:

# Also view around line 50 to see if there are other timer usages in the file
sed -n '45,75p' src/hold-ai/client.test.ts

Length of output: 1033

🏁 Script executed:

# Check all setTimeout usages in the file to ensure we captured them all
rg -n "setTimeout" src/hold-ai/client.test.ts

Length of output: 329

🏁 Script executed:

# View the context around the setTimeout calls to better understand usage
sed -n '75,95p' src/hold-ai/client.test.ts

Length of output: 778

🏁 Script executed:

# View the context around line 123
sed -n '115,140p' src/hold-ai/client.test.ts

Length of output: 1082

🏁 Script executed:

# Let's check the complete context of the setTimeout calls to understand the full impact
sed -n '77,102p' src/hold-ai/client.test.ts

Length of output: 1204
These timeout guards are calling the promise-based setTimeout like the callback API.
The imported setTimeout comes from node:timers/promises, which has a signature setTimeout(delay?, value?) that returns a Promise. When you call setTimeout(5000, () => {...}), the callback function is passed as the value parameter, not as a callback to execute. Both timeout guards at lines 85 and 135 never execute their rejection logic and will just hang indefinitely if the client fails to connect, relying only on the outer test timeout.
🛠️ Suggested change
-import { setTimeout } from "node:timers/promises";
+import { setTimeout as sleep } from "node:timers/promises";
@@
- await setTimeout(50); // allow client to process
+ await sleep(50); // allow client to process
@@
- await setTimeout(50);
+ await sleep(50);
@@
- await setTimeout(1000);
+ await sleep(1000);
@@
- setTimeout(5000, () => {
+ const timeout = globalThis.setTimeout(() => {
clearInterval(interval);
reject(new Error("Client did not connect to mock server in time"));
- });
+ }, 5000);
@@
- setTimeout(7000, () => {
+ const timeout = globalThis.setTimeout(() => {
clearInterval(interval);
reject(new Error("Client did not connect to mock server after starting late"));
- });
+ }, 7000);🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/hold-ai/client.test.ts` at line 4, The tests import setTimeout from
node:timers/promises but call it like the callback API (e.g., setTimeout(5000,
() => reject(...))) so the callback is treated as the Promise value and never
runs; update the timeout guards to use the promise form — replace calls like
setTimeout(5000, () => reject(...)) with setTimeout(5000).then(() => reject(new
Error("timeout waiting for client"))) or wrap the operation in
Promise.race([operationPromise, setTimeout(5000).then(() => { throw new
Error("timeout waiting for client"); })]) so the guard actually rejects if the
client connect (or the operation used in the test) doesn't complete in time;
adjust both usages of setTimeout in the test file where the promise-based import
is used.
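The Promise.race variant mentioned in the prompt can be packaged as a small helper; a minimal sketch, assuming only the promise-based setTimeout from node:timers/promises:

```typescript
import { setTimeout as sleep } from "node:timers/promises";

// Race an operation against a delay; the delay arm rejects, so the guard
// actually fires instead of silently resolving with the callback as a value.
async function withTimeout<T>(op: Promise<T>, ms: number, label: string): Promise<T> {
    return Promise.race([
        op,
        sleep(ms).then(() => {
            throw new Error(`Timed out after ${ms}ms: ${label}`);
        }),
    ]);
}
```

Promise.race subscribes to every arm, so the losing promise's late settlement is absorbed and does not surface as an unhandled rejection.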
const startMockServer = (port = 9091): Promise<WebSocketServer> => {
    return new Promise((resolve) => {
        const wss = new WebSocketServer({ port });
        wss.on("listening", () => resolve(wss));
        wss.on("connection", (ws) => {
            connectedClientWS = ws; // Capture client connection
        });
        mockWSS = wss;
    });
🧩 Analysis chain
🏁 Script executed:

cat -n src/hold-ai/client.test.ts | sed -n '55,75p'

Length of output: 891

🏁 Script executed:

rg "9091|WebSocketServer.*port" src/hold-ai/server.test.ts -A 2 -B 2

Length of output: 519

🏁 Script executed:

rg "afterEach|afterAll|beforeEach|beforeAll" src/hold-ai/client.test.ts -B 1 -A 3

Length of output: 550

🏁 Script executed:

cat -n src/hold-ai/client.test.ts | sed -n '45,60p'

Length of output: 556

🏁 Script executed:

rg "import.*WebSocketServer|from.*ws" src/hold-ai/client.test.ts

Length of output: 114
Add error handling to startMockServer() Promise.
The Promise only resolves on "listening" but doesn't reject on error. If port 9091 is already in use—which happens because src/hold-ai/server.test.ts also binds to the same port—the WebSocketServer emits an error event and the Promise never settles, causing the test to hang until timeout.
Add a reject handler and error event listener:
🛠️ Suggested change
const startMockServer = (port = 9091): Promise<WebSocketServer> => {
- return new Promise((resolve) => {
+ return new Promise((resolve, reject) => {
const wss = new WebSocketServer({ port });
wss.on("listening", () => resolve(wss));
+ wss.on("error", reject);
wss.on("connection", (ws) => {
connectedClientWS = ws; // Capture client connection
});
mockWSS = wss;
});
};Alternatively, use port 0 to let the OS assign a free port and avoid collisions entirely.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
const startMockServer = (port = 9091): Promise<WebSocketServer> => {
    return new Promise((resolve, reject) => {
        const wss = new WebSocketServer({ port });
        wss.on("listening", () => resolve(wss));
        wss.on("error", reject);
        wss.on("connection", (ws) => {
            connectedClientWS = ws; // Capture client connection
        });
        mockWSS = wss;
    });
};
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/hold-ai/client.test.ts` around lines 61 - 69, The Promise returned by
startMockServer (used by tests) never rejects on server errors, so if the port
is in use the test hangs; modify startMockServer to attach an "error" listener
on the WebSocketServer (wss.on("error", err => reject(err))) and ensure reject
is available in the Promise executor, and keep the existing "listening" and
"connection" handlers that set mockWSS and connectedClientWS; as an alternative
change the default port parameter from 9091 to 0 to let the OS pick a free port
and avoid collisions.
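The port-0 alternative can be sketched with node:net; the same idea applies to ws's WebSocketServer, which also reports the OS-assigned port via address(). This is a minimal illustration, not the test file's actual helper:

```typescript
import { createServer, type AddressInfo } from "node:net";

// Bind to port 0 so the OS picks a free port, and reject on error instead of
// hanging; the caller reads the assigned port back from address().
function startOnFreePort(): Promise<{ port: number; close: () => void }> {
    return new Promise((resolve, reject) => {
        const server = createServer();
        server.on("error", reject);
        server.listen(0, () => {
            const { port } = server.address() as AddressInfo;
            resolve({ port, close: () => server.close() });
        });
    });
}
```

Each suite that starts its own server this way cannot collide with another suite, so the shared-9091 conflict between client.test.ts and server.test.ts disappears.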
import { afterEach, describe, expect, it } from "bun:test";
import { type ChildProcess, spawn } from "node:child_process"; // Using node:child_process for more control
import { resolve } from "node:path";
import { setTimeout } from "node:timers/promises"; // For async delays
🧩 Analysis chain
🏁 Script executed:

cat -n src/hold-ai/server.test.ts

Length of output: 8968
Use the callback-based setTimeout from globalThis for timeout logic instead of the promise-based API from node:timers/promises.
The callback handlers at lines 30, 51, and 66 will never execute because node:timers/promises.setTimeout does not accept callbacks. Use globalThis.setTimeout() for these timeout patterns, and clear the timers on success paths to prevent memory leaks.
Suggested fix
-import { setTimeout } from "node:timers/promises"; // For async delays
+// Remove this import; use globalThis.setTimeout for callbacks

- setTimeout(5000, () => rej(new Error("Server start timed out")));
+ const timeout = globalThis.setTimeout(() => rej(new Error("Server start timed out")), 5000);
+ serverProcess.once("exit", () => clearTimeout(timeout));

- setTimeout(2000, () => {
+ const killTimeout = globalThis.setTimeout(() => {
      // Timeout for kill
      if (!serverProcess.killed) {
          serverProcess.kill("SIGKILL");
      }
      resolve();
- });
+ }, 2000);
+ serverProcess.once("exit", () => clearTimeout(killTimeout));

- setTimeout(2000, () => reject(new Error("Client connection timed out")));
+ const timeout = globalThis.setTimeout(() => reject(new Error("Client connection timed out")), 2000);
+ ws.once("open", () => clearTimeout(timeout));
+ ws.once("error", () => clearTimeout(timeout));
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/hold-ai/server.test.ts` at line 4, The tests import setTimeout from
"node:timers/promises" which returns a Promise and therefore the callback-based
timeout handlers in the test callbacks will never run; replace usage by removing
that import and using globalThis.setTimeout(...) when you need to schedule
callbacks (e.g., the timeout handlers referenced near the test callback blocks
and in any functions like the anonymous callbacks at the current test
assertions), and ensure each created timer is cleared on success paths using
clearTimeout(timerId) to avoid leaks; update any assertions that rely on
promise-based delays to instead wrap expectations around the callback timers or
use await for the promise form consistently.
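The callback-timer pattern from the suggested fix can be captured in one helper; a minimal runnable sketch where EventEmitter stands in for the child process or WebSocket used in the tests:

```typescript
import { EventEmitter } from "node:events";

// Guard an event with a callback timer from globalThis.setTimeout, clearing
// the timer on the success path so it cannot leak or fire late.
function waitForEvent(source: EventEmitter, event: string, ms: number): Promise<void> {
    return new Promise((resolve, reject) => {
        const timer = globalThis.setTimeout(() => {
            source.off(event, onEvent);
            reject(new Error(`"${event}" did not fire within ${ms}ms`));
        }, ms);
        const onEvent = () => {
            clearTimeout(timer); // success path: cancel the guard
            resolve();
        };
        source.once(event, onEvent);
    });
}
```

In the test this would replace each inline setTimeout(delay, callback) call, e.g. awaiting waitForEvent(serverProcess as unknown as EventEmitter, "exit", 2000) during shutdown.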
| it("should send existing messages to a new client", async () => { | ||
| serverProcess = await startTestServer(); | ||
|
|
||
| // Simulate server receiving messages via its Enquirer prompt | ||
| // This is hard to do directly without complex IPC or refactoring server for testability. | ||
| // Instead, we'll test the effect: start server, manually add messages (if server allowed it, it doesn't), then connect client. | ||
| // The current server only adds messages via Enquirer. We can't directly inject messages for this test easily. | ||
| // Alternative: modify server to accept initial messages via e.g. env var for testing, or mock Enquirer globally. | ||
|
|
||
| // For now, this test is limited. We'll assume if a client connects, and IF there were messages, they'd be sent. | ||
| // We can test message broadcasting more directly. | ||
| // This specific test for *existing* messages is hard with current server design. | ||
| // We will test message broadcasting in another test. | ||
| let client: WebSocket | null = null; | ||
| try { | ||
| client = await connectClient(); | ||
| // If we could preload messages on server, we'd check for them here. | ||
| // For now, just ensure connection works. | ||
| expect(client.readyState).toBe(WebSocket.OPEN); | ||
| } finally { | ||
| client?.close(); | ||
| } | ||
| }, 10000); |
This does not test “existing messages to a new client.”
The body only connects a client and checks readyState === OPEN; none of the setup or assertions populate prior messages or verify replay. As written it duplicates the smoke test above and leaves the replay behavior untested.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/hold-ai/server.test.ts` around lines 91 - 113, The test "should send
existing messages to a new client" currently only opens a client; instead, start
the server with startTestServer(), connect a first client via connectClient(),
have that first client send one or more chat messages using the same message
format/protocol the server expects (e.g., client.send(...)), wait for the server
to accept/store those messages, then connect a second client via connectClient()
and assert the second client receives the previously-sent messages (check
received message payloads and that client.readyState === WebSocket.OPEN);
finally close both clients. Use startTestServer, connectClient, WebSocket and
client.send to locate and implement these changes in the test.
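One way to make the replay behavior testable without refactoring the Enquirer input path is to isolate it in a message store; the MessageStore and subscribe names below are hypothetical, not the server's actual API:

```typescript
// Sketch: the server pushes messages into the store, and each new connection
// gets the backlog replayed before receiving live broadcasts.
type Listener = (msg: string) => void;

class MessageStore {
    private history: string[] = [];
    private listeners = new Set<Listener>();

    add(msg: string): void {
        this.history.push(msg);
        for (const l of this.listeners) l(msg); // broadcast to live connections
    }

    subscribe(listener: Listener): () => void {
        for (const msg of this.history) listener(msg); // replay backlog first
        this.listeners.add(listener);
        return () => this.listeners.delete(listener);
    }
}
```

With this split, the replay test asserts on the store directly (add two messages, subscribe, expect both), while a separate socket-level smoke test covers the wiring.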
// Assuming the functions and interfaces are exported from index.ts
// If not, we might need to copy them here or adjust the import.
// For this example, let's assume they might not be directly exported from a script that also runs itself.
// So, we'll redefine the necessary parts or use a way to import them if the script is structured for it.

// --- Definitions (copied or adapted from index.ts if not exportable) ---
interface Message {
    content: string;
    threadId: string;
}

interface InputJson {
    json: {
        messages: Message[];
    };
}

interface OutputMessageInfo {
    threadLink: string;
    contentSizeKB: number;
}

function processMessages(input: InputJson): OutputMessageInfo[] {
    const messages = input.json.messages;

    const messageInfo = messages.map((message) => {
        const contentSizeBytes = new TextEncoder().encode(message.content).length;
        const contentSizeKB = contentSizeBytes / 1024;
        const threadLink = `https://t3.chat/chat/${message.threadId}`;

        return {
            threadLink,
            contentSizeKB,
        };
    });

    messageInfo.sort((a, b) => b.contentSizeKB - a.contentSizeKB);

    return messageInfo;
}
This suite tests a local copy, not the real tool.
processMessages() is reimplemented inside the test file instead of imported from or executed through src/t3chat-length/index.ts. These cases can all pass while the production CLI regresses in parsing, formatting, or sorting.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/t3chat-length/index.test.ts` around lines 3 - 42, The test reimplements
processMessages locally instead of exercising the real code, so update the test
to import and/or invoke the actual implementation from
src/t3chat-length/index.ts (or export the relevant symbols there) rather than
redefining processMessages; ensure the production module exports the
parsing/formatting/sorting functions (e.g., processMessages or a CLI entry
point) and modify index.test.ts to import that exported function or run the CLI
entry so the tests validate the real behavior (refer to processMessages in the
test and the corresponding export in index.ts).
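A sketch of the export-based fix: the implementation below matches the copy quoted above, with the only change being the export keyword so the test can import the real function instead of redefining it.

```typescript
// In src/t3chat-length/index.ts (sketch): export the logic under test.
interface Message { content: string; threadId: string; }
interface InputJson { json: { messages: Message[] }; }
interface OutputMessageInfo { threadLink: string; contentSizeKB: number; }

export function processMessages(input: InputJson): OutputMessageInfo[] {
    const info = input.json.messages.map((message) => ({
        threadLink: `https://t3.chat/chat/${message.threadId}`,
        contentSizeKB: new TextEncoder().encode(message.content).length / 1024,
    }));
    info.sort((a, b) => b.contentSizeKB - a.contentSizeKB); // largest first
    return info;
}

// In index.test.ts, import the real implementation instead of a local copy:
//   import { processMessages } from "./index";
```

Any regression in the CLI's sorting or link formatting then fails the suite, which the current copied definitions cannot catch.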
} else {
    await sleep(500);
}
The follow-mode helper can lose the first chunk of output.
In follow mode, runWatchScript() waits 500 ms before returning, but waitForOutput() only subscribes to future data events. If the watch command prints the initial EXISTING FILE snapshot during that window, the later wait never sees it and the test flakes.
Also applies to: 178-199
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/watch/index.test.ts` around lines 56 - 58, runWatchScript's 500ms sleep
causes initial output to be emitted before tests call waitForOutput, and
waitForOutput only listens to future 'data' events so the first chunk can be
missed; fix by ensuring waitForOutput inspects any already-buffered output when
attaching (or change runWatchScript to not delay returning). Concretely, update
waitForOutput to check the current contents of the stdout/stderr buffer or
accumulated log string (the same buffer used in tests) immediately when
subscribing (in addition to adding 'data' listeners), or remove the sleep in
runWatchScript so the child process is returned before the initial snapshot is
printed (apply same change for the other occurrence referenced around lines
178-199). Ensure references to runWatchScript and waitForOutput are used so the
test subscribes to existing output as well as future events.
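A sketch of the buffer-checking waitForOutput described above; proc and the shared buffer are stand-ins for the helpers in the test file (EventEmitter models the child's stdout stream):

```typescript
import { EventEmitter } from "node:events";

// Resolve when `needle` appears in the accumulated output, checking the
// already-buffered text first so chunks printed before subscription count too.
function waitForOutput(
    source: EventEmitter,
    getBuffer: () => string,
    needle: string,
    timeoutMs = 5000
): Promise<void> {
    return new Promise((resolve, reject) => {
        if (getBuffer().includes(needle)) return resolve(); // pre-existing output
        const timer = setTimeout(() => {
            cleanup();
            reject(new Error(`Timed out waiting for ${JSON.stringify(needle)}`));
        }, timeoutMs);
        const onData = () => {
            if (getBuffer().includes(needle)) {
                cleanup();
                resolve();
            }
        };
        const cleanup = () => {
            clearTimeout(timer);
            source.off("data", onData);
        };
        source.on("data", onData);
    });
}
```

Because the buffer is re-read on each data event, the EXISTING FILE snapshot emitted during runWatchScript's 500 ms startup delay can no longer be lost.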
Summary
Test plan
bun test to verify all test suites pass

Summary by CodeRabbit