
feat: segmented progress bar with running/pending counts #57

Open
yfarjoun wants to merge 2 commits into fg-labs:main from yfarjoun:yf_segmented-progress-bar

Conversation

Contributor

@yfarjoun yfarjoun commented Mar 30, 2026

Summary

  • Progress bar now shows four proportional segments: green (succeeded), red (failed), yellow (running), dim (pending)
  • Legend always displays non-zero segment counts, so the bar is informative even before any jobs complete
  • Previously, running jobs were invisible in the progress bar — the bar was entirely yellow/dim until jobs finished
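The proportional four-segment layout described above can be sketched roughly as follows. This is an illustrative approximation, not the PR's actual `_make_progress_bar()` code; the function and variable names here are hypothetical, and the real TUI applies green/red/yellow/dim styles via Rich rather than distinct characters:

```python
def make_segment_widths(
    succeeded: int, failed: int, running: int, total: int, width: int
) -> tuple[int, int, int, int]:
    """Split `width` cells into four proportional segments.

    Pending absorbs whatever width is left over, so the bar always
    fills exactly `width` cells. A sketch, not the PR's code.
    """
    total = max(1, total)  # guard against division by zero
    succeeded_w = int((succeeded / total) * width)
    failed_w = int((failed / total) * width)
    running_w = int((running / total) * width)
    pending_w = width - succeeded_w - failed_w - running_w
    return succeeded_w, failed_w, running_w, pending_w


def render_bar(widths: tuple[int, int, int, int]) -> str:
    # Stand-in characters for the green/red/yellow/dim segments.
    chars = "█▓▒░"
    return "".join(c * w for c, w in zip(chars, widths))
```

For example, with 5 succeeded, 1 failed, and 2 running out of 10 jobs on a 20-cell bar, the segments come out 10/2/4/4 cells wide.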

Test plan

  • Existing TUI tests pass (183/183)
  • Manual verification with a running pipeline to confirm yellow segments appear on startup

🤖 Generated with Claude Code

Summary by CodeRabbit

  • Bug Fixes
    • Progress bar now consistently shows four segments — succeeded, failed, running, and pending — with stable width allocation and unified legend rendering.
  • Style
    • Added distinct visual styling for the running segment and adjusted pending visuals for clearer status differentiation.
  • Tests
    • Updated TUI accessibility test expectations to match the new legend and segment behavior.

@yfarjoun yfarjoun requested a review from nh13 as a code owner March 30, 2026 21:00

coderabbitai bot commented Mar 30, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 42b4d7ce-0574-4e8b-8369-51e1f8676153

📥 Commits

Reviewing files that changed from the base of the PR and between ff77ff6 and cff8035.

📒 Files selected for processing (2)
  • snakesee/tui/accessibility.py
  • tests/test_tui.py
🚧 Files skipped from review as they are similar to previous changes (2)
  • snakesee/tui/accessibility.py
  • tests/test_tui.py

📝 Walkthrough

Walkthrough

Progress-bar rendering in the workflow monitor was refactored to use four explicit segments—succeeded, failed, running, and pending—and the progress panel always combines the progress line and legend. Accessibility config gained a running style; tests updated to reflect the new legend behavior.

Changes

  • Progress Bar & Panel Display (snakesee/tui/monitor.py)
    Refactored _make_progress_bar() to model four segments (succeeded, failed, running, pending) and allocate remaining width to pending. _make_progress_panel() now derives running = len(progress.running_jobs), builds a legend from non-zero segments (succeeded/failed/running/pending), and always renders the combined progress line + legend panel.
  • Accessibility Config (snakesee/tui/accessibility.py)
    Added running: BarStyle to AccessibilityConfig and initialized it in DEFAULT_CONFIG and ACCESSIBLE_CONFIG with distinct char values and the label "running".
  • Tests (tests/test_tui.py)
    Updated the TUI accessibility-mode test to include non-empty running_jobs; assertions now expect the legend to contain succeeded and remaining (and to exclude failed when failed_jobs=0). Test name and docstring adjusted accordingly.
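The legend behavior summarized above (only non-zero segments appear) might look roughly like this sketch. The function name and signature are illustrative, not the PR's exact code:

```python
def build_legend(succeeded: int, failed: int, running: int, pending: int) -> str:
    """Join "<count> <label>" entries for every non-zero segment,
    so a bar with no failures simply omits the failed entry."""
    entries = [
        (succeeded, "succeeded"),
        (failed, "failed"),
        (running, "running"),
        (pending, "pending"),
    ]
    return "  ".join(f"{count} {label}" for count, label in entries if count > 0)
```

For example, `build_legend(50, 0, 2, 48)` yields `"50 succeeded  2 running  48 pending"`, with no "failed" entry.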

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

I nibble at the progress bar’s tail,
Four segments hopping, neat and pale,
Succeeded, failed, and running too,
Pending waits for jobs to queue,
A rabbit cheers the view anew 🐇✨

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Title check (Passed): The title accurately and concisely summarizes the main change: introducing a segmented progress bar with distinct running and pending job counts.
  • Description check (Passed): The description provides a clear summary of changes and test plan but is missing required checklist items and the type-of-change classification from the template.
  • Docstring Coverage (Passed): Docstring coverage is 100.00%, which is sufficient; the required threshold is 80.00%.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
snakesee/tui/monitor.py (1)

1674-1676: ⚠️ Potential issue | 🟡 Minor

Use canonical pending calculation for legend/count consistency.

Line 1733 recomputes pending with total - completed - failed - running, which can diverge from WorkflowProgress.pending_jobs (used elsewhere) and can display misleading values when total_jobs is 0 or when incomplete jobs exist.

Proposed fix
-        total = max(1, progress.total_jobs)
+        total_jobs = progress.total_jobs
+        total = max(1, total_jobs)
         completed = progress.completed_jobs + progress.failed_jobs
         percent = (completed / total) * 100
@@
-        progress_line.append(f"({completed}/{total} jobs)", style="dim")
+        progress_line.append(f"({completed}/{total_jobs} jobs)", style="dim")
@@
-        pending = total - progress.completed_jobs - progress.failed_jobs - running
+        pending = progress.pending_jobs

Also applies to: 1733-1735
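The divergence the comment describes can be shown with a toy snapshot. If counters are updated non-atomically (a job bumped into completed before it leaves running), the manual subtraction briefly goes negative, whereas a canonical pending count would not. The values below are purely illustrative:

```python
# Hypothetical counter snapshot mid-update: one job was just added to
# completed_jobs but has not yet been removed from the running set.
total_jobs = 10
completed_jobs = 6
failed_jobs = 1
running = 4  # overlaps with completed_jobs for an instant

manual_pending = total_jobs - completed_jobs - failed_jobs - running
print(manual_pending)  # -1: the legend would show a negative pending count
```

Reading a single canonical field (here, `WorkflowProgress.pending_jobs`) avoids arithmetic over counters that may be mutually inconsistent at render time.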

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@snakesee/tui/monitor.py` around lines 1674 - 1676, The code recomputes
pending jobs manually causing divergence; replace manual pending calculation
with the canonical WorkflowProgress.pending_jobs value to keep legend/count
consistent: use progress.pending_jobs instead of deriving pending as total -
completed - failed - running (and ensure total uses max(1, progress.total_jobs)
only for percent calculation as before). Update both places that recompute
pending (the block around percent calculation and the block at lines noted) to
read pending = progress.pending_jobs and use that for display/legend/percent
logic so counts stay consistent across the UI.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@snakesee/tui/monitor.py`:
- Around line 1654-1657: Clamp the computed segment widths to avoid negative
values when counters briefly exceed total: when computing succeeded_width,
failed_width, and running_width (the existing variables in
snakesee.tui.monitor.py), guard against total == 0 and use max(0, int((count /
total) * width)) for each, then compute pending_width as max(0, width -
succeeded_width - failed_width - running_width) so pending never goes negative;
update the calculations where succeeded_width, failed_width, running_width and
pending_width are assigned.

---

Outside diff comments:
In `@snakesee/tui/monitor.py`:
- Around line 1674-1676: The code recomputes pending jobs manually causing
divergence; replace manual pending calculation with the canonical
WorkflowProgress.pending_jobs value to keep legend/count consistent: use
progress.pending_jobs instead of deriving pending as total - completed - failed
- running (and ensure total uses max(1, progress.total_jobs) only for percent
calculation as before). Update both places that recompute pending (the block
around percent calculation and the block at lines noted) to read pending =
progress.pending_jobs and use that for display/legend/percent logic so counts
stay consistent across the UI.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: f9a6b5c6-9a6f-4f94-a445-f96f035f5458

📥 Commits

Reviewing files that changed from the base of the PR and between 453d660 and d752118.

📒 Files selected for processing (1)
  • snakesee/tui/monitor.py

Comment on lines 1654 to +1657
 succeeded_width = int((succeeded / total) * width)
 failed_width = int((failed / total) * width)
-remaining_width = width - succeeded_width - failed_width
+running_width = int((running / total) * width)
+pending_width = width - succeeded_width - failed_width - running_width


⚠️ Potential issue | 🟡 Minor

Guard segment width math against transient counter skew.

If counts briefly exceed total, pending_width becomes negative at Line 1657, and the bar under-renders. Clamp intermediate widths and pending width to non-negative bounds.

Proposed fix
-        succeeded_width = int((succeeded / total) * width)
-        failed_width = int((failed / total) * width)
-        running_width = int((running / total) * width)
-        pending_width = width - succeeded_width - failed_width - running_width
+        succeeded_width = min(width, max(0, int((succeeded / total) * width)))
+        failed_width = min(
+            width - succeeded_width, max(0, int((failed / total) * width))
+        )
+        running_width = min(
+            width - succeeded_width - failed_width,
+            max(0, int((running / total) * width)),
+        )
+        pending_width = max(0, width - succeeded_width - failed_width - running_width)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@snakesee/tui/monitor.py` around lines 1654 - 1657, Clamp the computed segment
widths to avoid negative values when counters briefly exceed total: when
computing succeeded_width, failed_width, and running_width (the existing
variables in snakesee.tui.monitor.py), guard against total == 0 and use max(0,
int((count / total) * width)) for each, then compute pending_width as max(0,
width - succeeded_width - failed_width - running_width) so pending never goes
negative; update the calculations where succeeded_width, failed_width,
running_width and pending_width are assigned.
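The effect of the suggested clamping can be checked with skewed counters. This is a standalone restatement of the proposed fix, not the PR's code, and the input values are illustrative:

```python
def clamped_widths(
    succeeded: int, failed: int, running: int, total: int, width: int
) -> tuple[int, int, int, int]:
    """Clamp each segment to the space still available, so the widths
    can never sum past `width` and pending can never go negative."""
    total = max(1, total)  # guard against total == 0
    succeeded_w = min(width, max(0, int((succeeded / total) * width)))
    failed_w = min(width - succeeded_w, max(0, int((failed / total) * width)))
    running_w = min(
        width - succeeded_w - failed_w, max(0, int((running / total) * width))
    )
    pending_w = max(0, width - succeeded_w - failed_w - running_w)
    return succeeded_w, failed_w, running_w, pending_w
```

With counters that briefly exceed total (e.g. 8 succeeded + 3 failed + 4 running against total=10 on a 20-cell bar), the clamped widths still sum to exactly 20 and every segment stays non-negative.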

Collaborator

@nh13 nh13 left a comment


Can you rebase on top of main and resolve the conflicts? I think the current main has some accessibility and "incomplete-job" features that this PR should consider.

legend_parts.append(("█", "red", f"{progress.failed_jobs} failed"))
if running > 0:
legend_parts.append(("█", "yellow", f"{running} running"))
pending = total - progress.completed_jobs - progress.failed_jobs - running
Collaborator


issue:
Use progress.pending_jobs

Comment on lines 1661 to 1664
Collaborator


question:
Why is this removed? This loses the "currently running" versus "was running when the workflow stopped" distinction...

Contributor Author


if the pipeline has stopped-> was running
if the pipeline is running -> is running

how can there be both?

The progress bar now shows four segments proportional to total jobs:
green (succeeded), red (failed), yellow (running), dim (pending).
The legend always displays non-zero segment counts so the bar is
informative even before any jobs complete.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@yfarjoun yfarjoun force-pushed the yf_segmented-progress-bar branch from d752118 to a1f6ff2 on March 31, 2026 13:46

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (2)
snakesee/tui/accessibility.py (1)

40-40: Update AccessibilityConfig docs to include running.

Line 40 adds a required field, but the Attributes section still skips running, so the schema docs are now stale.

📝 Suggested doc update
 class AccessibilityConfig:
@@
     Attributes:
         succeeded: Style for completed/succeeded jobs.
         failed: Style for failed jobs.
+        running: Style for currently running jobs.
         remaining: Style for remaining/pending jobs.
         incomplete: Style for incomplete jobs (workflow interrupted).
         show_legend: If True, always show the legend (not just on failure).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@snakesee/tui/accessibility.py` at line 40, The AccessibilityConfig
docstring's Attributes section is missing the newly added required field
"running"; update the AccessibilityConfig documentation to include an Attributes
entry for "running: BarStyle" (e.g., "running: BarStyle — style used for the
running state of progress bars") so the schema docs reflect the new required
field and its purpose.
tests/test_tui.py (1)

2248-2267: Add a default-mode assertion for non-zero running legend entries.

This test covers succeeded/failed/remaining, but not running, which is central to this PR’s behavior change.

✅ Suggested test strengthening
     def test_default_mode_legend_shows_segments(
         self, tui_with_mocks: WorkflowMonitorTUI
     ) -> None:
         """In default mode, legend shows non-zero segments."""
         from snakesee.tui.accessibility import DEFAULT_CONFIG

         tui_with_mocks._accessibility_config = DEFAULT_CONFIG
-        progress = make_workflow_progress(total_jobs=100, completed_jobs=50, failed_jobs=0)
+        progress = make_workflow_progress(
+            total_jobs=100,
+            completed_jobs=50,
+            failed_jobs=0,
+            running_jobs=[make_job_info(rule="align"), make_job_info(rule="sort")],
+        )
         panel = tui_with_mocks._make_progress_panel(progress, None)
@@
         output = buf.getvalue()
         assert "succeeded" in output
         assert "failed" not in output
+        assert "running" in output
         assert "remaining" in output
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_tui.py` around lines 2248 - 2267, The test
test_default_mode_legend_shows_segments should also assert that a non-zero
running segment is shown: when you create progress via make_workflow_progress
with running jobs > 0 (or by setting completed_jobs and failed_jobs so running
is implied), render the panel via WorkflowMonitorTUI._make_progress_panel and
assert that "running" appears in the console output; update the test to use
DEFAULT_CONFIG as before and add an assertion like assert "running" in output so
_make_progress_panel correctly includes running legend entries in default mode.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@snakesee/tui/accessibility.py`:
- Line 40: The AccessibilityConfig docstring's Attributes section is missing the
newly added required field "running"; update the AccessibilityConfig
documentation to include an Attributes entry for "running: BarStyle" (e.g.,
"running: BarStyle — style used for the running state of progress bars") so the
schema docs reflect the new required field and its purpose.

In `@tests/test_tui.py`:
- Around line 2248-2267: The test test_default_mode_legend_shows_segments should
also assert that a non-zero running segment is shown: when you create progress
via make_workflow_progress with running jobs > 0 (or by setting completed_jobs
and failed_jobs so running is implied), render the panel via
WorkflowMonitorTUI._make_progress_panel and assert that "running" appears in the
console output; update the test to use DEFAULT_CONFIG as before and add an
assertion like assert "running" in output so _make_progress_panel correctly
includes running legend entries in default mode.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 5c8dd0b8-5fb6-4ead-ad4c-daf9f22ee4a7

📥 Commits

Reviewing files that changed from the base of the PR and between d752118 and a1f6ff2.

📒 Files selected for processing (3)
  • snakesee/tui/accessibility.py
  • snakesee/tui/monitor.py
  • tests/test_tui.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • snakesee/tui/monitor.py

…verage

Add missing `running` field to AccessibilityConfig docstring and strengthen
test_default_mode_legend_shows_segments to assert running segment appears.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@yfarjoun yfarjoun force-pushed the yf_segmented-progress-bar branch from ff77ff6 to cff8035 on April 2, 2026 18:37