feat: segmented progress bar with running/pending counts #57

yfarjoun wants to merge 2 commits into fg-labs:main
Conversation
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration: defaults | Review profile: CHILL | Plan: Pro
📒 Files selected for processing (2)
🚧 Files skipped from review as they are similar to previous changes (2)

📝 Walkthrough

Progress-bar rendering in the workflow monitor was refactored to use four explicit segments—succeeded, failed, running, and pending—and the progress panel always combines the progress line and legend. Accessibility config gained a `running` style.

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
snakesee/tui/monitor.py (1)

1674-1676: ⚠️ Potential issue | 🟡 Minor

Use canonical pending calculation for legend/count consistency.

Line 1733 recomputes pending with `total - completed - failed - running`, which can diverge from `WorkflowProgress.pending_jobs` (used elsewhere) and can display misleading values when `total_jobs` is 0 or when incomplete jobs exist.

Proposed fix:

```diff
-    total = max(1, progress.total_jobs)
+    total_jobs = progress.total_jobs
+    total = max(1, total_jobs)
     completed = progress.completed_jobs + progress.failed_jobs
     percent = (completed / total) * 100
@@
-    progress_line.append(f"({completed}/{total} jobs)", style="dim")
+    progress_line.append(f"({completed}/{total_jobs} jobs)", style="dim")
@@
-    pending = total - progress.completed_jobs - progress.failed_jobs - running
+    pending = progress.pending_jobs
```

Also applies to: 1733-1735
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@snakesee/tui/monitor.py` around lines 1674 - 1676, The code recomputes pending jobs manually causing divergence; replace manual pending calculation with the canonical WorkflowProgress.pending_jobs value to keep legend/count consistent: use progress.pending_jobs instead of deriving pending as total - completed - failed - running (and ensure total uses max(1, progress.total_jobs) only for percent calculation as before). Update both places that recompute pending (the block around percent calculation and the block at lines noted) to read pending = progress.pending_jobs and use that for display/legend/percent logic so counts stay consistent across the UI.
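The divergence the comment describes is easy to reproduce. A minimal sketch, using a hypothetical stand-in for snakesee's `WorkflowProgress` (field names follow the review comment; the real class may differ):

```python
from dataclasses import dataclass


# Hypothetical stand-in for snakesee's WorkflowProgress, for illustration only.
@dataclass
class WorkflowProgress:
    total_jobs: int
    completed_jobs: int
    failed_jobs: int
    incomplete_jobs: int  # e.g. jobs interrupted when the workflow stopped
    running: int

    @property
    def pending_jobs(self) -> int:
        # Canonical definition: everything not terminal, not incomplete,
        # and not currently active.
        return max(
            0,
            self.total_jobs
            - self.completed_jobs
            - self.failed_jobs
            - self.incomplete_jobs
            - self.running,
        )


progress = WorkflowProgress(
    total_jobs=10, completed_jobs=4, failed_jobs=1, incomplete_jobs=2, running=1
)

# Manual recomputation, as in the flagged code: it ignores incomplete jobs,
# so the two incomplete jobs are silently reported as pending.
manual = progress.total_jobs - progress.completed_jobs - progress.failed_jobs - progress.running
print(manual)                 # 4
print(progress.pending_jobs)  # 2
```

With incomplete jobs in play, the legend would claim 4 pending while the canonical property reports 2 — exactly the inconsistency the reviewer flags.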
```python
succeeded_width = int((succeeded / total) * width)
failed_width = int((failed / total) * width)
remaining_width = width - succeeded_width - failed_width
running_width = int((running / total) * width)
pending_width = width - succeeded_width - failed_width - running_width
```
Guard segment width math against transient counter skew.
If counts briefly exceed total, pending_width becomes negative at Line 1657, and the bar under-renders. Clamp intermediate widths and pending width to non-negative bounds.
Proposed fix
```diff
- succeeded_width = int((succeeded / total) * width)
- failed_width = int((failed / total) * width)
- running_width = int((running / total) * width)
- pending_width = width - succeeded_width - failed_width - running_width
+ succeeded_width = min(width, max(0, int((succeeded / total) * width)))
+ failed_width = min(
+     width - succeeded_width, max(0, int((failed / total) * width))
+ )
+ running_width = min(
+     width - succeeded_width - failed_width,
+     max(0, int((running / total) * width)),
+ )
+ pending_width = max(0, width - succeeded_width - failed_width - running_width)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@snakesee/tui/monitor.py` around lines 1654 - 1657, Clamp the computed segment
widths to avoid negative values when counters briefly exceed total: when
computing succeeded_width, failed_width, and running_width (the existing
variables in snakesee.tui.monitor.py), guard against total == 0 and use max(0,
int((count / total) * width)) for each, then compute pending_width as max(0,
width - succeeded_width - failed_width - running_width) so pending never goes
negative; update the calculations where succeeded_width, failed_width,
running_width and pending_width are assigned.
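The clamped arithmetic can be checked in isolation. A standalone sketch (not the project's code; names mirror the diff above) showing that skewed counters no longer drive the pending segment negative:

```python
def segment_widths(
    succeeded: int, failed: int, running: int, total: int, width: int
) -> tuple[int, int, int, int]:
    """Split a bar of `width` cells into four non-negative segments.

    Each segment is clamped so transient counter skew (counts briefly
    summing past `total`) can never make pending_width negative.
    """
    total = max(1, total)  # guard against division by zero
    succeeded_width = min(width, max(0, int((succeeded / total) * width)))
    failed_width = min(
        width - succeeded_width, max(0, int((failed / total) * width))
    )
    running_width = min(
        width - succeeded_width - failed_width,
        max(0, int((running / total) * width)),
    )
    pending_width = max(0, width - succeeded_width - failed_width - running_width)
    return succeeded_width, failed_width, running_width, pending_width


# Skewed counters: 60 + 30 + 30 = 120 > total of 100. The running segment
# gets truncated to the cells that remain, and pending bottoms out at 0.
print(segment_widths(succeeded=60, failed=30, running=30, total=100, width=40))
# → (24, 12, 4, 0)
```

The sum of the four widths never exceeds the bar width, so the bar always renders full and in order.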
nh13 left a comment:

Can you rebase on top of main and resolve the conflicts? I think the current main has some accessibility and "incomplete-job" features that this PR should consider.
```python
legend_parts.append(("█", "red", f"{progress.failed_jobs} failed"))
if running > 0:
    legend_parts.append(("█", "yellow", f"{running} running"))
pending = total - progress.completed_jobs - progress.failed_jobs - running
```
issue: Use progress.pending_jobs

question: Why is this removed? This loses the "currently running" versus "was running when the workflow stopped" distinction...

Reply: if the pipeline has stopped -> was running; if the pipeline is running -> is running. How can there be both?
The progress bar now shows four segments proportional to total jobs: green (succeeded), red (failed), yellow (running), dim (pending). The legend always displays non-zero segment counts so the bar is informative even before any jobs complete.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
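The four-segment layout the commit describes can be sketched with plain text, glyphs standing in for the TUI's colored blocks (naive proportional widths, without the clamping proposed in review):

```python
def render_bar(succeeded: int, failed: int, running: int, total: int, width: int = 40) -> str:
    """Render a four-segment bar: '#' succeeded, 'x' failed, '~' running, '.' pending."""
    total = max(1, total)  # avoid division by zero
    s = int((succeeded / total) * width)
    f = int((failed / total) * width)
    r = int((running / total) * width)
    p = max(0, width - s - f - r)  # pending absorbs the rounding remainder
    return "#" * s + "x" * f + "~" * r + "." * p


# 40 jobs: 12 succeeded, 2 failed, 4 running, 22 pending.
# In a 20-cell bar this yields 6 '#', 1 'x', 2 '~', 11 '.'.
print(render_bar(12, 2, 4, 40, width=20))
```

Because each segment width is truncated with `int()`, the pending segment picks up the leftover cells, which is why it is computed by subtraction rather than by its own ratio.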
Force-pushed d752118 to a1f6ff2.
🧹 Nitpick comments (2)

snakesee/tui/accessibility.py (1)

40-40: Update `AccessibilityConfig` docs to include `running`.

Line 40 adds a required field, but the Attributes section still skips `running`, so the schema docs are now stale.

📝 Suggested doc update:

```diff
 class AccessibilityConfig:
 @@
     Attributes:
         succeeded: Style for completed/succeeded jobs.
         failed: Style for failed jobs.
+        running: Style for currently running jobs.
         remaining: Style for remaining/pending jobs.
         incomplete: Style for incomplete jobs (workflow interrupted).
         show_legend: If True, always show the legend (not just on failure).
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@snakesee/tui/accessibility.py` at line 40, The AccessibilityConfig docstring's Attributes section is missing the newly added required field "running"; update the AccessibilityConfig documentation to include an Attributes entry for "running: BarStyle" (e.g., "running: BarStyle — style used for the running state of progress bars") so the schema docs reflect the new required field and its purpose.

tests/test_tui.py (1)
2248-2267: Add a default-mode assertion for non-zero `running` legend entries.

This test covers succeeded/failed/remaining, but not `running`, which is central to this PR's behavior change.

✅ Suggested test strengthening:

```diff
 def test_default_mode_legend_shows_segments(
     self, tui_with_mocks: WorkflowMonitorTUI
 ) -> None:
     """In default mode, legend shows non-zero segments."""
     from snakesee.tui.accessibility import DEFAULT_CONFIG

     tui_with_mocks._accessibility_config = DEFAULT_CONFIG
-    progress = make_workflow_progress(total_jobs=100, completed_jobs=50, failed_jobs=0)
+    progress = make_workflow_progress(
+        total_jobs=100,
+        completed_jobs=50,
+        failed_jobs=0,
+        running_jobs=[make_job_info(rule="align"), make_job_info(rule="sort")],
+    )
     panel = tui_with_mocks._make_progress_panel(progress, None)
@@
     output = buf.getvalue()
     assert "succeeded" in output
     assert "failed" not in output
+    assert "running" in output
     assert "remaining" in output
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_tui.py` around lines 2248 - 2267, The test test_default_mode_legend_shows_segments should also assert that a non-zero running segment is shown: when you create progress via make_workflow_progress with running jobs > 0 (or by setting completed_jobs and failed_jobs so running is implied), render the panel via WorkflowMonitorTUI._make_progress_panel and assert that "running" appears in the console output; update the test to use DEFAULT_CONFIG as before and add an assertion like assert "running" in output so _make_progress_panel correctly includes running legend entries in default mode.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 5c8dd0b8-5fb6-4ead-ad4c-daf9f22ee4a7
📒 Files selected for processing (3)
- snakesee/tui/accessibility.py
- snakesee/tui/monitor.py
- tests/test_tui.py
🚧 Files skipped from review as they are similar to previous changes (1)
- snakesee/tui/monitor.py
…verage

Add missing `running` field to AccessibilityConfig docstring and strengthen test_default_mode_legend_shows_segments to assert running segment appears.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

Force-pushed ff77ff6 to cff8035.