
Conversation

@alepane21 (Contributor) commented Jan 29, 2026

Summary by CodeRabbit

  • New Features

    • CLI shows a progress bar during publish (when not in quiet/text mode)
    • Publishing runs in batches for more reliable processing
  • Improvements

    • Aggregates batch results for consistent per-operation output and summary
    • Controlled parallelization to improve throughput while preserving lifecycle handling
  • Bug Fixes / Validation

    • Rejects oversized publish requests with a clear error
  • Tests

    • Added test covering oversized payload rejection


Checklist

  • I have discussed my proposed changes in an issue and have received approval to proceed.
  • I have followed the coding standards of the project.
  • Tests or benchmarks have been added or updated.
  • Documentation has been updated on https://github.com/wundergraph/cosmo-docs.
  • I have read the Contributors Guide.

coderabbitai bot (Contributor) commented Jan 29, 2026

Walkthrough

Client now publishes operations in batches of 100 with a CLI progress indicator and aggregates results; server enforces a 100-operation payload limit (returns ResourceExhausted) and tests verify rejection when sending 101 operations.
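The client-side batching described above can be sketched as a small chunking helper. This is a minimal illustration: `OPERATION_BATCH_SIZE` matches the value in the walkthrough, while the `PersistedOperation` shape and the `chunkOperations` helper are hypothetical stand-ins for the real CLI types.

```typescript
// Batch size used by the CLI when publishing persisted operations
// (value from the walkthrough; the surrounding types are illustrative).
const OPERATION_BATCH_SIZE = 100;

interface PersistedOperation {
  id: string;
  contents: string;
}

// Split the full operation list into chunks of at most OPERATION_BATCH_SIZE,
// each of which would be sent as one publishPersistedOperations request.
function chunkOperations(
  operations: PersistedOperation[],
  size: number = OPERATION_BATCH_SIZE,
): PersistedOperation[][] {
  const chunks: PersistedOperation[][] = [];
  for (let start = 0; start < operations.length; start += size) {
    chunks.push(operations.slice(start, start + size));
  }
  return chunks;
}
```

Each chunk maps to one RPC, so a 250-operation push would become three requests of 100, 100, and 50 operations, with the results accumulated across batches.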

Changes

  • CLI Batched Processing (cli/src/commands/operations/commands/push.ts): Implements chunked publishing with OPERATION_BATCH_SIZE = 100, iterates over batches, shows a progress bar when applicable, aggregates per-batch PublishedOperation results, adapts text/JSON output to the aggregated data, and ensures correct progress-bar lifecycle and per-batch error handling.
  • Backend Validation & Concurrency (controlplane/src/core/bufservices/persisted-operation/publishPersistedOperations.ts): Adds MAX_PERSISTED_OPERATIONS = 100 and PARALLEL_PERSISTED_OPERATIONS_LIMIT = 25; imports Code, ConnectError, and HandlerContext; changes publishPersistedOperations to accept ctx: HandlerContext; validates request size and throws a ConnectError with Code.ResourceExhausted when more than 100 operations are sent; introduces limited parallel per-operation processing and structured result accumulation.
  • Tests (controlplane/test/persisted-operations.test.ts): Adds a test asserting that publishing 101 operations is rejected with a ConnectError whose code is ResourceExhausted; imports Code and ConnectError.
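The server-side limit check can be sketched as follows. The real handler uses ConnectError and Code.ResourceExhausted from @connectrpc/connect; to keep this sketch self-contained, a minimal RpcError stand-in is used, and the validateRequestSize helper is hypothetical.

```typescript
// Limit from the walkthrough; RpcError is a minimal stand-in for
// @connectrpc/connect's ConnectError carrying a gRPC status code.
const MAX_PERSISTED_OPERATIONS = 100;
const RESOURCE_EXHAUSTED = 8; // numeric gRPC code for ResourceExhausted

class RpcError extends Error {
  constructor(
    message: string,
    public readonly code: number,
  ) {
    super(message);
  }
}

// Reject oversized publish requests before any per-operation work begins.
function validateRequestSize(operationCount: number): void {
  if (operationCount > MAX_PERSISTED_OPERATIONS) {
    throw new RpcError(
      `too many operations: ${operationCount} exceeds the limit of ${MAX_PERSISTED_OPERATIONS}`,
      RESOURCE_EXHAUSTED,
    );
  }
}
```

Rejecting the request up front keeps the error cheap and deterministic: exactly 100 operations passes, 101 fails, mirroring the behavior the new test asserts.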

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks | ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 0.00%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions that are missing them.

✅ Passed checks (2 passed)
  • Description Check ✅ Passed: Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: The title "feat: max number of persistent operations per request" accurately reflects the main change: introducing a maximum limit (MAX_PERSISTED_OPERATIONS = 100) for operations per request, with validation and error handling.



codecov bot commented Jan 29, 2026

Codecov Report

❌ Patch coverage is 26.31579% with 98 lines in your changes missing coverage. Please review.
✅ Project coverage is 56.09%. Comparing base (b188b46) to head (7fbeb36).

Files with missing lines:
  • cli/src/commands/operations/commands/push.ts: 1.28% patch coverage, 77 lines missing ⚠️
  • .../persisted-operation/publishPersistedOperations.ts: 61.81% patch coverage, 21 lines missing ⚠️

❌ Your patch check has failed because the patch coverage (26.31%) is below the target coverage (90.00%). You can increase the patch coverage or adjust the target coverage.

Additional details and impacted files
@@            Coverage Diff             @@
##            main    #2477       +/-   ##
==========================================
+ Coverage   1.50%   56.09%   +54.58%     
==========================================
  Files        292      423      +131     
  Lines      46816    52886     +6070     
  Branches     431     4650     +4219     
==========================================
+ Hits         703    29664    +28961     
+ Misses     45830    23198    -22632     
+ Partials     283       24      -259     
Files with missing lines (coverage Δ):
  • .../persisted-operation/publishPersistedOperations.ts: 63.76% <61.81%> (ø)
  • cli/src/commands/operations/commands/push.ts: 35.35% <1.28%> (ø)

... and 713 files with indirect coverage changes


coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@cli/src/commands/operations/commands/push.ts`:
- Around lines 167-199: Wrap the loop that calls opts.client.platform.publishPersistedOperations in a try/finally so the progress bar is always stopped on error. Move the existing `for (let start = 0; start < operations.length; start += OPERATION_BATCH_SIZE) { ... }` into a try block and call `if (bar) bar.stop();` in the finally. Keep the per-chunk error handling (the result.response?.code check and command.error) inside the loop, and let any thrown RPC error propagate so the finally block performs cleanup. Relevant symbols: bar, publishPersistedOperations, operations, OPERATION_BATCH_SIZE, processed, publishedOperations.
- Around lines 235-243: Replace the traditional index-based for loop with a for...of over entries() to get both the index and the element. Iterate with `for (const [ii, op] of publishedOperations.entries())` and set `returnedOperations[op.id] = { hash: op.hash, contents: operations[ii].contents, status: jsonOperationStatus(op.status), operationNames: op.operationNames ?? [] }`. This keeps access to both publishedOperations[ii] and operations[ii], satisfies the unicorn/no-for-loop rule, and preserves existing behavior. Relevant symbols: publishedOperations, operations, returnedOperations, jsonOperationStatus.
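The two fixes above (a try/finally guarding the progress bar, and a for...of over entries() when rebuilding output) can be sketched together. This is a simplified illustration: the ProgressBar interface, the publishBatch callback, and publishAll itself are hypothetical stand-ins for the real CLI code and its platform client.

```typescript
// Illustrative shapes, not the real cli-progress or platform-client APIs.
interface ProgressBar {
  update(n: number): void;
  stop(): void;
}
interface PublishedOperation {
  id: string;
  hash: string;
}

async function publishAll(
  operations: { contents: string }[],
  publishBatch: (batch: { contents: string }[]) => Promise<PublishedOperation[]>,
  bar?: ProgressBar,
  batchSize = 100,
): Promise<Record<string, { hash: string; contents: string }>> {
  const published: PublishedOperation[] = [];
  let processed = 0;
  try {
    for (let start = 0; start < operations.length; start += batchSize) {
      const chunk = operations.slice(start, start + batchSize);
      published.push(...(await publishBatch(chunk)));
      // Derive progress from the attempted-send count, not from the
      // (possibly deduplicated) result length.
      processed += chunk.length;
      bar?.update(processed);
    }
  } finally {
    bar?.stop(); // runs even when a batch RPC throws mid-loop
  }

  // for...of over entries() keeps both index and element available.
  const returned: Record<string, { hash: string; contents: string }> = {};
  for (const [ii, op] of published.entries()) {
    returned[op.id] = { hash: op.hash, contents: operations[ii].contents };
  }
  return returned;
}
```

The finally block guarantees the terminal is restored whether the loop completes, a chunk fails validation, or the RPC itself rejects.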

alepane21 and others added 2 commits January 29, 2026 12:13
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@cli/src/commands/operations/commands/push.ts`:
- Around lines 192-195: The progress bar undercounts because it increments by result.operations.length, which can be smaller than the sent chunk if the backend deduplicates. Update the progress using the number of inputs attempted (the local chunk/batch length): change `processed += result.operations.length` to `processed += chunk.length` (or whichever variable holds the sent items), then call `bar.update(processed)`. Still push the returned result.operations into publishedOperations, but derive progress from the attempted-send count. Relevant symbols: publishedOperations, processed, bar.update, result.operations.length.
- Around lines 213-216: The text output accesses op.operationNames directly, which can be undefined at runtime, unlike the JSON path, which uses a nullish guard. Update the text branch in push.ts (the block that builds the message, referencing humanReadableOperationStatus and op.operationNames) to guard operationNames with a nullish-coalescing fallback (e.g., `op.operationNames ?? []`) before checking length and joining, matching the JSON handler's defensive pattern and avoiding runtime errors.
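The defensive guard described above can be sketched as follows. The PublishedOp shape and formatOperationLine helper are illustrative, not the actual push.ts code, which builds a longer message via humanReadableOperationStatus.

```typescript
// The operationNames field may be absent on the wire, so the text path
// must not assume it exists (the JSON path already guards it).
interface PublishedOp {
  id: string;
  operationNames?: string[];
}

function formatOperationLine(op: PublishedOp): string {
  const names = op.operationNames ?? []; // nullish guard: field may be undefined
  const suffix = names.length > 0 ? ` (${names.join(', ')})` : '';
  return `${op.id}${suffix}`;
}
```

With the fallback in place, an operation with no names renders as just its id instead of throwing on `.length` of undefined.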
🧹 Nitpick comments (1)
cli/src/commands/operations/commands/push.ts (1)

237-243: Avoid index-based access when combining batched API results; use ID-based mapping instead.

The code accumulates operations across multiple API batches but reconstructs the JSON output using index alignment. While the current API implementation preserves ordering, this assumes an undocumented contract. If the API behavior changes—reordering results, filtering operations, or querying in a different order—the index lookup will silently return incorrect contents without any validation.

♻️ Suggested refactor
-        const returnedOperations: Record<string, OperationOutput> = {};
-        for (const [ii, op] of publishedOperations.entries()) {
+        const contentsById = new Map(operations.map((op) => [op.id, op.contents]));
+        const returnedOperations: Record<string, OperationOutput> = {};
+        for (const op of publishedOperations) {
           returnedOperations[op.id] = {
             hash: op.hash,
-            contents: operations[ii].contents,
+            contents: contentsById.get(op.id) ?? '',
             status: jsonOperationStatus(op.status),
             operationNames: op.operationNames ?? [],
           };
         }

