Run multiple handles in parallel #233
base: main
Conversation
AgustinSilverfin left a comment
I haven't tested it myself but in general it looks good 👍
Walkthrough

Adds variadic support for handles and account templates in the Run Liquid Test CLI, makes the CLI action async with a guard blocking multiple templates unless --status is used, adapts runTestsStatusOnly to accept and process multiple handles in parallel and return aggregated results, and bumps the package version to 1.48.0.
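The guard described above can be sketched as a small pure function. This is illustrative only, not the PR's actual implementation; `templateNames` and `options` are hypothetical names standing in for the Commander-parsed values:

```javascript
// Hypothetical sketch of the CLI guard: more than one template is rejected
// unless the --status flag was passed. Names are illustrative.
function validateTemplateCount(templateNames, options) {
  if (templateNames.length > 1 && !options.status) {
    return { ok: false, error: "Multiple templates are only supported with --status" };
  }
  return { ok: true };
}
```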
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Actionable comments posted: 3
♻️ Duplicate comments (1)
lib/liquidTestRunner.js (1)
457-465: Consider addressing previous review feedback on formatting and log level.

Based on past review comments:
- Line 463 uses `consola.log` for failed test names, but a reviewer suggested using `consola.error` for consistency, since these are failures.
- A reviewer suggested formatting failed tests inline (e.g. `handle_2: FAILED [unit_test_1, unit_test_2]`) rather than on separate indented lines, which would make the output easier to parse programmatically.

Option 1: Use `consola.error` for failures

```diff
 } else {
-  consola.log(`${singleHandle}: ${status}`);
+  consola.error(`${singleHandle}: ${status}`);
   // Display failed test names
   failedTestNames.forEach((testName) => {
-    consola.log(`  ${testName}: FAILED`);
+    consola.error(`  ${testName}: FAILED`);
   });
 }
```

Option 2: Inline formatting for easier parsing

```diff
 if (status === "PASSED") {
   consola.log(`${singleHandle}: ${status}`);
 } else {
-  consola.log(`${singleHandle}: ${status}`);
-  // Display failed test names
-  failedTestNames.forEach((testName) => {
-    consola.log(`  ${testName}: FAILED`);
-  });
+  const failedList = failedTestNames.length > 0 ? ` [${failedTestNames.join(', ')}]` : '';
+  consola.error(`${singleHandle}: ${status}${failedList}`);
 }
```

Based on learnings, addressing reviewer feedback is important for collaboration.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- bin/cli.js (1 hunks)
- lib/liquidTestRunner.js (1 hunks)
- package.json (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
lib/liquidTestRunner.js (1)
bin/cli.js (2)
- templateType (468-468)
- testName (502-502)
bin/cli.js (3)
lib/liquidTestRunner.js (1)
- options (21-21)

lib/cli/utils.js (1)
- options (89-89)

lib/cli/devMode.js (1)
- liquidTestRunner (4-4)
🔇 Additional comments (4)
package.json (1)
3-3: LGTM! Version bump aligns with the new multi-handle feature.

The version update to 1.48.0 is appropriate for this new functionality.
bin/cli.js (3)
454-455: LGTM! Variadic options correctly enable multiple handles/templates.

The use of `<handle...>` and `<name...>` with Commander.js properly enables multiple values while maintaining backward compatibility for single values.
476-480: Status mode correctly passes array to parallel processor.

The implementation properly awaits `runTestsStatusOnly` with the full `templateName` array, enabling parallel execution for multiple handles.
482-490: Non-status mode correctly extracts single template.

The extraction of `singleTemplateName` from `templateName[0]` is appropriate for backward compatibility with single-template execution, and the preview validation logic is preserved.
```diff
 // RETURN (AND LOG) ONLY PASSED OR FAILED
 // CAN BE USED BY GITHUB ACTIONS
-async function runTestsStatusOnly(firmId, templateType, handle, testName = "") {
+async function runTestsStatusOnly(firmId, templateType, handles, testName = "") {
```
🛠️ Refactor suggestion | 🟠 Major
Add input validation for handles parameter.
The function now accepts handles as an array, but there's no validation to ensure it's actually an array or that it's not empty. If an invalid value is passed, it could cause runtime errors.
Apply this diff to add validation:
```diff
 async function runTestsStatusOnly(firmId, templateType, handles, testName = "") {
+  if (!Array.isArray(handles) || handles.length === 0) {
+    consola.error(`At least one handle must be provided`);
+    process.exit(1);
+  }
+
   if (templateType !== "reconciliationText" && templateType !== "accountTemplate") {
     consola.error(`Template type is missing or invalid`);
     process.exit(1);
   }
```

🤖 Prompt for AI Agents
In lib/liquidTestRunner.js around line 424, the runTestsStatusOnly function
accepts a handles parameter but lacks validation; add a check at the top that
ensures handles is an Array and not empty (e.g., if (!Array.isArray(handles) ||
handles.length === 0) throw new TypeError("handles must be a non-empty array")
or return a rejected Promise), so invalid inputs are rejected early with a clear
error; update any callers or tests if they rely on implicit behavior.
```diff
+const runSingleHandle = async (singleHandle) => {
+  let status = "FAILED";
+  const failedTestNames = [];
+  const testResult = await runTests(firmId, templateType, singleHandle, testName, false, "none");
   if (!testResult) {
     status = "PASSED";
-    consola.success(status);
-    return status;
-  }
-  const testRun = testResult?.testRun;
-  if (testRun && testRun?.status === "completed") {
-    const errorsPresent = checkAllTestsErrorsPresent(testRun.tests);
-    if (errorsPresent === false) {
-      status = "PASSED";
-      consola.success(status);
-      return status;
-    }
-  }
-  consola.error(status);
-  return status;
+  } else {
+    const testRun = testResult?.testRun;
+    if (testRun && testRun?.status === "completed") {
+      const errorsPresent = checkAllTestsErrorsPresent(testRun.tests);
+      if (errorsPresent === false) {
+        status = "PASSED";
+      } else {
+        // Extract failed test names
+        const testNames = Object.keys(testRun.tests).sort();
+        testNames.forEach((testName) => {
+          const testErrorsPresent = checkTestErrorsPresent(testName, testRun.tests);
+          if (testErrorsPresent) {
+            failedTestNames.push(testName);
+          }
+        });
+      }
+    }
+  }
+  if (status === "PASSED") {
+    consola.log(`${singleHandle}: ${status}`);
+  } else {
+    consola.log(`${singleHandle}: ${status}`);
+    // Display failed test names
+    failedTestNames.forEach((testName) => {
+      consola.log(`  ${testName}: FAILED`);
+    });
+  }
+  return { handle: singleHandle, status, failedTestNames };
+};
```
Risk: `Promise.all` will abort all handles if one throws an exception.

If any single handle encounters an unhandled exception (not a test failure, but an actual error such as a network timeout or invalid YAML), `Promise.all` will reject immediately and the remaining handles won't complete or report their results. This defeats the purpose of parallel execution for CI/CD pipelines, where you want to see results for all handles even if some fail.
Consider wrapping each handle execution to catch exceptions:
```diff
 const runSingleHandle = async (singleHandle) => {
+  try {
   let status = "FAILED";
   const failedTestNames = [];
   const testResult = await runTests(firmId, templateType, singleHandle, testName, false, "none");
   if (!testResult) {
     status = "PASSED";
   } else {
     const testRun = testResult?.testRun;
     if (testRun && testRun?.status === "completed") {
       const errorsPresent = checkAllTestsErrorsPresent(testRun.tests);
       if (errorsPresent === false) {
         status = "PASSED";
       } else {
         // Extract failed test names
         const testNames = Object.keys(testRun.tests).sort();
         testNames.forEach((testName) => {
           const testErrorsPresent = checkTestErrorsPresent(testName, testRun.tests);
           if (testErrorsPresent) {
             failedTestNames.push(testName);
           }
         });
       }
     }
   }
   if (status === "PASSED") {
     consola.log(`${singleHandle}: ${status}`);
   } else {
     consola.log(`${singleHandle}: ${status}`);
     // Display failed test names
     failedTestNames.forEach((testName) => {
       consola.log(`  ${testName}: FAILED`);
     });
   }
   return { handle: singleHandle, status, failedTestNames };
+  } catch (error) {
+    consola.error(`${singleHandle}: ERROR - ${error.message}`);
+    return { handle: singleHandle, status: "FAILED", failedTestNames: [], error: error.message };
+  }
 };
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```javascript
const runSingleHandle = async (singleHandle) => {
  try {
    let status = "FAILED";
    const failedTestNames = [];
    const testResult = await runTests(firmId, templateType, singleHandle, testName, false, "none");
    if (!testResult) {
      status = "PASSED";
    } else {
      const testRun = testResult?.testRun;
      if (testRun && testRun?.status === "completed") {
        const errorsPresent = checkAllTestsErrorsPresent(testRun.tests);
        if (errorsPresent === false) {
          status = "PASSED";
        } else {
          // Extract failed test names
          const testNames = Object.keys(testRun.tests).sort();
          testNames.forEach((testName) => {
            const testErrorsPresent = checkTestErrorsPresent(testName, testRun.tests);
            if (testErrorsPresent) {
              failedTestNames.push(testName);
            }
          });
        }
      }
    }
    if (status === "PASSED") {
      consola.log(`${singleHandle}: ${status}`);
    } else {
      consola.log(`${singleHandle}: ${status}`);
      // Display failed test names
      failedTestNames.forEach((testName) => {
        consola.log(`  ${testName}: FAILED`);
      });
    }
    return { handle: singleHandle, status, failedTestNames };
  } catch (error) {
    consola.error(`${singleHandle}: ERROR - ${error.message}`);
    return { handle: singleHandle, status: "FAILED", failedTestNames: [], error: error.message };
  }
};
```
🤖 Prompt for AI Agents
In lib/liquidTestRunner.js around lines 430 to 468, the current runSingleHandle
flow can throw unhandled exceptions which will cause Promise.all to reject and
abort all parallel handle runs; wrap the body of runSingleHandle in a try/catch
(or ensure callers use Promise.allSettled) so any runtime error is caught, set a
clear status (e.g., "ERROR" or keep "FAILED"), capture the error message into
the returned object (or push into failedTestNames) and log it, and always return
{ handle, status, failedTestNames, error?: message } so the aggregator gets a
result for every handle even on exceptions.
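As the prompt notes, an alternative to a per-handle try/catch is to aggregate with `Promise.allSettled`, which never rejects. A self-contained sketch under that assumption; `runAllHandles` and the runner callback are hypothetical names, not the PR's actual code:

```javascript
// Promise.allSettled reports every handle's outcome even if some runs throw,
// so one failing handle cannot abort the others. Illustrative sketch only.
async function runAllHandles(handles, runSingleHandle) {
  const settled = await Promise.allSettled(handles.map((h) => runSingleHandle(h)));
  return settled.map((result, i) =>
    result.status === "fulfilled"
      ? result.value
      : { handle: handles[i], status: "FAILED", failedTestNames: [], error: String(result.reason) }
  );
}
```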
Fixes # (link to the corresponding issue if applicable)
Description
Multiple handles/ATs can now be passed to the run-test command, e.g.:

```shell
silverfin run-test -f 200619 -h be_pit_wages_salaries_part_2 be_pit_wages_salaries --status
```

or

```shell
silverfin run-test -f 14400 -h 'Dubieuze debiteuren' 'Erelonen' --status
```

Note that I only allowed/activated this for the --status flag, since that covers our use case for now (GitHub Actions).
This can be expanded later on if required.
Had some issues with the handle flag being rendered as an array even when there was only one element/handle, but some checks solved this.
Reporting is done per handle (and if there are differences, we show the failing units/tests as well).
QQ: in general, this is quicker than running them one by one, but in some cases, the API seems to be very slow. Is there a way in which we can avoid that?
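One common way to ease pressure on a slow API is to cap how many handles run at once instead of launching them all together. A minimal concurrency-pool sketch; all names are hypothetical and this is not part of the PR:

```javascript
// Runs `worker` over `items` with at most `limit` requests in flight at a time,
// preserving result order. Illustrative only.
async function runWithConcurrency(items, worker, limit = 3) {
  const results = new Array(items.length);
  let next = 0;
  async function lane() {
    while (next < items.length) {
      const i = next++; // safe: no await between read and increment
      results[i] = await worker(items[i]);
    }
  }
  const lanes = Array.from({ length: Math.min(limit, items.length) }, lane);
  await Promise.all(lanes);
  return results;
}
```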
Testing Instructions
Steps:
Author Checklist
Reviewer Checklist