feat: execute droptable and reopen tasks in batch #421
thweetkomputer wants to merge 2 commits into main from
Conversation
Walkthrough
This PR introduces bounded in-flight request control for global operations in EloqStore via a new configuration option, max_global_request_batch.

Changes
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (2 warnings)
✅ Passed checks (1 passed)
Actionable comments posted: 1
🧹 Nitpick comments (1)
src/eloq_store.cpp (1)
933-938: Make the global scheduler bounded end-to-end.
Each handler now limits concurrent dispatch, but it still allocates one subrequest object per partition before the first send. That keeps heap growth proportional to partition count and repeats the same scheduler skeleton three times. A small shared helper that keeps only a bounded pool of live subrequests and creates them lazily would remove the duplication and make max_global_request_batch cover memory as well as concurrency.

Also applies to: 940-1008, 1176-1187, 1192-1260, 1398-1407, 1409-1496
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/eloq_store.cpp` around lines 933 - 938, The current handlers eagerly allocate one subrequest per partition (e.g., creating TruncateRequest instances and pushing into req->truncate_reqs_) which allows heap growth proportional to partition count; refactor by introducing a small shared helper (e.g., a BoundedSubrequestPool or make_lazy_subrequest_generator) that maintains at most max_global_request_batch live subrequests, creates TruncateRequest (and the other subrequest types used in the other handlers) lazily when about to dispatch, and recycles/completes them when a dispatched subrequest finishes; replace the per-partition eager loops that push into truncate_reqs_ with iteration that obtains subrequests from this pool and sends them, and reuse the same helper across the three similar handler sites (the ranges called out) so memory is bounded by max_global_request_batch as well as concurrency.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/kv_options.cpp`:
- Around line 178-182: When loading max_global_request_batch in KvOptions (the
reader.HasValue/GetUnsigned block for sec_run and max_global_request_batch),
ensure a zero value is not accepted: after reading the unsigned value, normalize
0 to 1 (or reject by returning an error); additionally add a defensive check in
ValidateOptions() to assert max_global_request_batch > 0 and fail validation if
not. Update the reader.GetUnsigned usage for max_global_request_batch and the
ValidateOptions() logic in KvOptions to reference the same semantic (must be >
0).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 63e013d1-96b6-4731-94f5-cf2bba06162e
📒 Files selected for processing (3)
include/kv_options.h, src/eloq_store.cpp, src/kv_options.cpp
```cpp
if (reader.HasValue(sec_run, "max_global_request_batch"))
{
    max_global_request_batch = reader.GetUnsigned(
        sec_run, "max_global_request_batch", max_global_request_batch);
}
```
Reject or normalize a zero batch limit here.
The schedulers later coerce max_global_request_batch == 0 to 1, so accepting 0 here leaves KvOptions in a state that never matches the effective runtime behavior. Please either validate > 0 in ValidateOptions() or normalize it once during load.
Based on learnings: "Enforce that max_inflight_write is greater than 0 before constructing IouringMgr ... Add an explicit check in the relevant initialization/ValidateOptions logic ..."
Here are some reminders before you submit the pull request
fixes eloqdb/eloqstore#issue_id
ctest --test-dir build/tests/

Summary by CodeRabbit
New Features
- max_global_request_batch configuration option to control concurrent per-partition request limits for global operations (default: 1000).

Refactor