Conversation
- add current-page selection controls in the sticky sessions table
- add a bulk delete action with confirmation dialog
- preserve filters and pagination after bulk deletion
- report partial failures for rows that could not be deleted
- add backend and frontend test coverage for bulk delete behavior
One note: I could not get any of the tests in tests/unit/test_load_balancer_concurrency.py to actually run. They just hang indefinitely...
@codex review
…ass_delete # Conflicts: # tests/integration/test_http_responses_bridge.py
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 7c64e56b29
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
```python
if target in seen:
    continue
seen.add(target)
removed = await self._repository.delete(key, kind=kind)
```
Batch bulk sticky-session deletes in a single transaction
Calling self._repository.delete(...) inside the per-entry loop turns one bulk request into N separate delete+commit operations (the repository method commits on every call), which is a major regression for requests near the 500-item API limit. This increases latency and makes failures mid-loop leave partially applied deletes with no complete result payload, unlike the previous batched behavior. Please keep the bulk path transactional/batched and compute deleted/failed from that batched outcome.
Useful? React with 👍 / 👎.
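The batched, single-transaction shape this suggestion asks for can be sketched as follows. This is a minimal illustration using sqlite3 and a hypothetical `sticky_sessions(key, kind)` table, not the repository's actual API; the point is one commit for the whole batch, with `deleted`/`failed` derived from that single outcome:

```python
import sqlite3


def bulk_delete_sessions(conn: sqlite3.Connection,
                         targets: list[tuple[str, str]]) -> dict:
    """Delete all requested (key, kind) rows in ONE transaction and
    report per-target outcomes, instead of committing per entry."""
    deleted, failed = [], []
    with conn:  # single transaction: one commit, full rollback on error
        for key, kind in targets:
            cur = conn.execute(
                "DELETE FROM sticky_sessions WHERE key = ? AND kind = ?",
                (key, kind),
            )
            if cur.rowcount:  # row existed and was deleted
                deleted.append({"key": key, "kind": kind})
            else:             # row was never there
                failed.append({"key": key, "kind": kind,
                               "reason": "not_found"})
    return {"deleted": deleted, "failed": failed}
```

A mid-batch failure then rolls the whole delete back, so the client never sees a partially applied result without a complete payload describing it.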
frontend/src/test/mocks/handlers.ts (Outdated)
```typescript
const deleted = payload.sessions.filter((session) =>
  targets.has(`${session.kind}:${session.key}`),
);
```
Report only truly deleted sessions in MSW bulk-delete response
The mock deleted list is built from payload.sessions filtered by targets, but targets is created from the same payload, so every requested row is reported as deleted even if it was never present in state.stickySessions. This makes mock responses internally inconsistent (deletedCount can be smaller than deleted.length) and hides partial-failure behavior in frontend tests.
Useful? React with 👍 / 👎.
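The real handler is TypeScript, but the outcome partitioning the review asks for can be sketched language-agnostically. A minimal Python sketch with hypothetical field names: only sessions actually present in the mock store are reported as deleted, everything else becomes a `not_found` failure, and `deletedCount` stays consistent with the `deleted` list:

```python
def mock_bulk_delete(state_sessions: list[dict], requested: list[dict]) -> dict:
    """Partition requested {kind, key} targets into truly-deleted rows
    (present in the mock store) and not_found failures (absent)."""
    existing = {(s["kind"], s["key"]): s for s in state_sessions}
    deleted, failed = [], []
    for target in requested:
        session = existing.pop((target["kind"], target["key"]), None)
        if session is not None:
            deleted.append(session)
        else:
            failed.append({**target, "reason": "not_found"})
    # deletedCount is derived from the same list it describes, so the
    # mock response can never be internally inconsistent.
    return {"deleted": deleted, "deletedCount": len(deleted), "failed": failed}
```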
I will fix this up shortly!
…_mass_delete # Conflicts: # tests/integration/test_http_responses_bridge.py
Restore transactional bulk deletion for sticky sessions and derive deleted/failed entries from the batched result. Update the frontend MSW handler to report only truly deleted sessions and surface not_found failures for missing rows.
…odex-lb into feat/sticky_session_mass_delete
@codex review
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: fa9eadc2df
```python
if target in seen:
    continue
seen.add(target)
```
Preserve duplicate targets in bulk-delete reporting
The new bulk-delete response is meant to report per-request outcomes, but this deduplication step drops repeated (key, kind) entries before execution. If a client submits duplicates (which the request schema currently allows), the response will under-report what happened by collapsing multiple requested deletions into one result, so consumers cannot reliably reconcile each submitted row (e.g., first delete succeeds, second should be not_found). Either preserve request multiplicity when building outcomes or reject duplicates at validation time.
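One way to preserve request multiplicity is to walk the submitted entries in order against the set of rows that actually existed, so each submitted row gets its own outcome. A minimal sketch with hypothetical names, not the service's real code:

```python
def per_request_outcomes(store: set[tuple[str, str]],
                         requests: list[tuple[str, str]]) -> list[dict]:
    """Report one outcome per submitted (key, kind) entry, preserving
    duplicates: the first occurrence of a target deletes it, and any
    later duplicate is reported as not_found rather than collapsed."""
    remaining = set(store)
    outcomes = []
    for target in requests:
        if target in remaining:
            remaining.discard(target)
            outcomes.append({"target": target, "status": "deleted"})
        else:
            outcomes.append({"target": target, "status": "not_found"})
    return outcomes
```

The alternative the review mentions, rejecting duplicates at validation time, avoids this bookkeeping entirely at the cost of a stricter request schema.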
I realize the functionality already existed, but I felt I could polish it a bit...
Better backend delete result contract
Upstream returned only:
This now returns:
Reference:
Best-effort partial failure handling
Upstream bulk delete reduced the outcome to a single success count.
This explicitly reports rows that could not be deleted instead of collapsing everything into one number.
Better frontend behavior after bulk delete
This branch adds:
Reference:
More explicit UX wording
Changed the bulk action from “Remove selected” to “Delete Sessions” and made the confirmation mention failure reporting.
OpenSpec coverage
Added a proper spec change under: