forked from cockroachdb/cockroach
sql: update sql metrics to agg metrics #1
Draft
aa-joshi wants to merge 2 commits into CRDB-48253_clear_child_metric_on_flag_change from update_sql_metric_with_aggmetric
Conversation
b5271e3 to 1386a3a (compare)
This patch introduces two new cluster settings: `sql.application_name_metrics.enabled` and `sql.database_name_metrics.enabled`. They control the `app` and `db` labels, respectively, on agg metrics whose children use the `StorageTypeCache` storage type. When either setting changes, the labels on the existing agg metrics must be added or removed so that the change is reflected in the next Prometheus scrape; this requires updating the labels associated with each agg metric. We renamed the `clear` method from PR cockroachdb#143558 to `reinitialise` and extended it to update childSet label values. On a change to either of the added settings, all tracked agg metrics whose children use `StorageTypeCache` are reinitialised with the updated label values.

Epic: CRDB-43153
Part of: CRDB-48253

Release note (sql change): The new cluster settings `sql.application_name_metrics.enabled` and `sql.database_name_metrics.enabled` (default `false`) can be set to `true` to include the application and database name, respectively, as labels on supported metrics.
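A minimal, self-contained sketch of the reinitialise-on-settings-change idea described above; all names here (`childSet`, `reinitialise`, `activeLabels`) are illustrative stand-ins, not the actual CockroachDB code:

```go
// Hypothetical sketch: a child-metric set that can be reinitialised with a
// new label schema when one of the boolean cluster settings flips.
package main

import (
	"fmt"
	"sync"
)

// childSet caches per-(app, db) children keyed by their label values.
type childSet struct {
	mu       sync.Mutex
	labels   []string         // currently active label names, e.g. ["app", "db"]
	children map[string]int64 // label-value key -> counter value
}

// reinitialise swaps in a new label schema and drops cached children so the
// next Prometheus scrape reflects the updated labels.
func (cs *childSet) reinitialise(labels []string) {
	cs.mu.Lock()
	defer cs.mu.Unlock()
	cs.labels = labels
	cs.children = map[string]int64{} // children are rebuilt lazily on next use
}

// activeLabels derives the label set from the two boolean settings.
func activeLabels(appEnabled, dbEnabled bool) []string {
	var labels []string
	if appEnabled {
		labels = append(labels, "app")
	}
	if dbEnabled {
		labels = append(labels, "db")
	}
	return labels
}

func main() {
	cs := &childSet{children: map[string]int64{}}
	// Simulate sql.application_name_metrics.enabled flipping to true.
	cs.reinitialise(activeLabels(true, false))
	fmt.Println("labels after setting change:", cs.labels)
}
```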
9c2a913 to eca47ad (compare)
This patch updates some of the SQL metric definitions to agg metrics to support db and app name labelling. The `sql.metrics.application_name.enabled` and `sql.metrics.database_name.enabled` cluster settings were introduced as part of cockroachdb#143719. The updated metrics export the db and app name as labels based on those cluster settings; the labels are not persisted to the TSDB.

Epic: CRDB-43153
Part of: CRDB-48251

Release note (sql change): Updated the metric type to agg metrics to support additional db and app name labels on metric export.
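To illustrate the labelled-children shape these agg metrics take, here is a rough stand-in written against prometheus client_golang's `CounterVec` rather than CockroachDB's aggmetric package; the metric name, the label handling, and the two boolean flags are assumptions for the sketch:

```go
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	reg := prometheus.NewRegistry()

	// A labelled counter standing in for an aggregated SQL metric: the parent
	// is the metric family, children are the ("app", "db") label combinations.
	execCount := prometheus.NewCounterVec(prometheus.CounterOpts{
		Name: "sql_exec_count",
		Help: "Number of SQL statements executed",
	}, []string{"app", "db"})
	reg.MustRegister(execCount)

	// Only attach real label values when the (hypothetical) settings are on;
	// otherwise fold everything into an empty-label child.
	appEnabled, dbEnabled := true, false
	app, db := "", ""
	if appEnabled {
		app = "myapp"
	}
	if dbEnabled {
		db = "defaultdb"
	}
	execCount.WithLabelValues(app, db).Inc()

	families, _ := reg.Gather()
	for _, f := range families {
		fmt.Println(f.GetName(), "has", len(f.GetMetric()), "child series")
	}
}
```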
eca47ad to b244d02 (compare)
aa-joshi pushed a commit that referenced this pull request on Apr 14, 2025
This commit fixes two race conditions in the index split operation:

1. After the state of the left sub-partition is set to Ready, say that the split unexpectedly fails, and that the left sub-partition then itself splits and is deleted. When the original split resumes, it cannot get the centroid for the left sub-partition, which is needed to run the K-means clustering algorithm.
2. As described by #1, it is possible that a splitting partition references target sub-partitions that are now missing from the index. This triggers PartitionNotFound errors in insert code paths.

The fixes are:

1. Update the logic so that vectors are first copied to the left and right sub-partitions before either sub-partition's state is updated from Updating to Ready. Only Ready sub-partitions can be split, so this should prevent race condition #1.
2. Update the insert logic so that searches of non-root partitions return multiple results, making it extremely likely that a suitable insert partition will be found. For root partitions, check split target sub-partitions instead, since the splitting partition does not share a parent with its sub-partitions.

Epic: CRDB-42943
Release note: None
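A hypothetical sketch of the fixed ordering in point 1 above (invented types; not the actual vector-index code):

```go
// Vectors are copied into both sub-partitions before either one is marked
// Ready, so a Ready (and therefore splittable) sub-partition can never be
// missing vectors that a resumed parent split still needs.
package main

import "fmt"

type partitionState int

const (
	stateUpdating partitionState = iota
	stateReady
)

type partition struct {
	state   partitionState
	vectors [][]float32
}

func finishSplit(left, right *partition, leftVecs, rightVecs [][]float32) {
	// Step 1: copy vectors while both sub-partitions are still Updating.
	left.vectors = append(left.vectors[:0], leftVecs...)
	right.vectors = append(right.vectors[:0], rightVecs...)

	// Step 2: only now flip both to Ready. Only Ready partitions are eligible
	// to split, so the copies above are always visible before a further split.
	left.state = stateReady
	right.state = stateReady
}

func main() {
	l, r := &partition{state: stateUpdating}, &partition{state: stateUpdating}
	finishSplit(l, r, [][]float32{{0, 1}}, [][]float32{{1, 0}})
	fmt.Println(l.state == stateReady, len(l.vectors), r.state == stateReady, len(r.vectors))
}
```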
aa-joshi pushed a commit that referenced this pull request on May 13, 2025
144781: roachtest: add operation to probe ranges r=noahstho a=noahstho

Since SRE uses crdb_internal.probe_ranges to test prod cluster health, we would like to add it as a roach operation, both to make the DRT cluster as realistic as possible and to test for potential issues with crdb_internal.probe_ranges, so we know as soon as possible if our alerting coverage drops.

**Background**

crdb_internal.probe_ranges is a virtual table that quickly probes the entire keyspace of the KV layer and returns a table with the schema (range_id | error | end_to_end_latency_ms). It has minimal dependencies, so it functions even when a cluster is quite broken. And since it probes the entire keyspace, it is useful for narrowing an issue down to specific ranges once something has already gone wrong.

**What can this roach operation catch?**

If this roach operation fails, either there is a bug in `crdb_internal.probe_ranges`, leaving SRE short a critical tool, or a serious bug is present in the KV layer and the KV team needs to know as soon as possible. Ideally SRE would be the first to know about an issue and can hand it off to KV if necessary.

**Testing**

Tested that it works on a roachtest cluster with `roachtest run-operation noahthompsoncockroachlabscom-test probe-ranges`, and also verified that it fails as expected by forcing a range error in the DB:

```
./bin/roachtest run-operation noahthompsoncockroachlabscom-test2 probe-ranges
Running operation probe-ranges on noahthompsoncockroachlabscom-test2.
2025/04/29 20:00:46 run_operation.go:145: [1] operation status: checking if operation probe-ranges/read dependencies are met
2025/04/29 20:00:47 run_operation.go:145: [1] operation status: running operation probe-ranges/read with run id 12821170976295052991
2025/04/29 20:00:47 probe_ranges.go:92: [1] operation status: executing crdb_internal.probe-ranges read statement against node 3
2025/04/29 20:00:47 probe_ranges.go:92: [1] operation status: found 1 errors while executing crdb_internal.probe-ranges read statement against node 3
2025/04/29 20:00:47 probe_ranges.go:92: [1] operation status: error on node 3 on range 4: test range error
2025/04/29 20:00:47 operation_impl.go:138: [1] operation failure #1: Found range errors when probing via crdb_internal.probe-ranges read statement against node 3
2025/04/29 20:00:47 run_operation.go:229: recovered from panic: o.Fatal() was called
```

**Future Work**

We would also like to enable the KVProber cluster setting to test this from a different angle; that should be a very easy change.

Fixes: cockroachdb#102034
Release note: None
Epic: None

145578: ttljob: add cluster setting to control concurrency r=rafiss a=rafiss

Each processor of the TTL job creates a number of goroutines that operate concurrently to scan for expired rows and delete them. Previously, the concurrency was always equal to GOMAXPROCS. This new setting allows it to be overridden. Once this is merged, we should update support runbooks to discuss this setting.

Informs: https://github.com/cockroachlabs/support/issues/3284
Epic: None
Release note: None

Co-authored-by: Noah Thompson <noah.thompson@cockroachlabs.com>
Co-authored-by: Rafi Shamim <rafi@cockroachlabs.com>
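For reference, a minimal Go sketch of the kind of probe query this operation wraps; the exact `crdb_internal.probe_ranges` call and the local connection string are assumptions, not taken from the PR:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // CockroachDB speaks the Postgres wire protocol
)

func main() {
	// Assumed local, insecure single-node cluster.
	db, err := sql.Open("postgres", "postgresql://root@localhost:26257/defaultdb?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Probe every range with a read and report only the ranges that errored.
	rows, err := db.Query(
		`SELECT range_id, error
		   FROM crdb_internal.probe_ranges(INTERVAL '1000ms', 'read')
		  WHERE error != ''`)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var rangeID int64
		var probeErr string
		if err := rows.Scan(&rangeID, &probeErr); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("range %d: %s\n", rangeID, probeErr)
	}
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
}
```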
aa-joshi pushed a commit that referenced this pull request on Jul 17, 2025
149479: roachtest: exit with failure on github post errors r=herkolategan,DarrylWong a=williamchoe3

Fixes cockroachdb#147116

### Changes

#### High-level changes

Added a new failure path by:

* Adding a new counter to the `testRunner` struct that gets incremented when `github.MaybePost()` (called in `testRunner.runWorkers()` and `testRunner.runTests()`) returns an error. When this count is > 0, `testRunner.Run()` returns a new error `errGithubPostFailed`, and when `main()` sees that error it returns a new exit code `12`, which fails the pipeline (unlike exit codes 10 and 11).
* This is very similar to how provisioning errors are tracked and returned to `main()`.
* It does not trigger the test short-circuiting mechanism, because `testRunner.runWorkers()` doesn't return an error.

```
type testRunner struct {
	...
	// numGithubPostErrs counts GitHub post errors across all workers.
	numGithubPostErrs int32
	...
}
...
issue, err := github.MaybePost(t, issueInfo, l, output, params) // TODO add cluster specific args here
if err != nil {
	shout(ctx, l, stdout, "failed to post issue: %s", err)
	atomic.AddInt32(&r.numGithubPostErrs, 1)
}
```

#### Design

To do verification via unit tests, I'm used to something like Python's MagicMock, but that's not available in Go, so I opted for a dependency-injection approach. (This was the best I could come up with; I wanted to avoid "if unit test, do this" logic. If anyone has other approaches or suggestions, let me know!)

I made a new interface `GithubPoster` such that the original `githubIssues` implements it. I then pass this interface through function signatures all the way from `Run()` to `runTests()`. In the unit tests, I can pass a different implementation of `GithubPoster` whose `MaybePost()` always fails.

`github.go`

```
type GithubPoster interface {
	MaybePost(
		t *testImpl, issueInfo *githubIssueInfo, l *logger.Logger, message string, params map[string]string) (
		*issues.TestFailureIssue, error)
}
```

Another issue with this approach is that the original `githubIssues` held cluster-specific information, but because of the dependency injection it is now a struct shared among all the workers, so it no longer makes sense for it to store worker-dependent fields. For the worker-specific fields, I created a new struct `githubIssueInfo` that is created in `runWorkers()`, similar to how `githubIssues` used to be created there.

Note: I don't love the name `githubIssueInfo`, but I wanted to stick with a naming convention similar to `githubIssues`; open to name suggestions.

```
// Original githubIssues
type githubIssues struct {
	disable      bool
	cluster      *clusterImpl
	vmCreateOpts *vm.CreateOpts
	issuePoster  func(context.Context, issues.Logger, issues.IssueFormatter, issues.PostRequest, *issues.Options) (*issues.TestFailureIssue, error)
	teamLoader   func() (team.Map, error)
}

// New githubIssues
type githubIssues struct {
	disable     bool
	issuePoster func(context.Context, issues.Logger, issues.IssueFormatter, issues.PostRequest, *issues.Options) (*issues.TestFailureIssue, error)
	teamLoader  func() (team.Map, error)
}
```

All of this was very verbose, and I didn't love that I had to change all the function signatures to do it; open to other ways to do verification.

### Misc

This is also my first time writing Go in about three years, so I'm very open to general Go feedback on semantics, best practices, and design patterns.

### Verification

Diff of the binary I used to manually confirm, if you want to see where I hardcoded the errors: cockroachdb@611adcc

#### Manual test logs

> ➜ cockroach git:(wchoe/147116-github-err-will-fail-pipeline) ✗ tmp/roachtest run acceptance/build-info --cockroach /Users/wchoe/work/cockroachdb/cockroach/bin_linux/cockroach
> ...
> Running tests which match regex "acceptance/build-info" and are compatible with cloud "gce".
>
> fallback runner logs in: artifacts/roachtest.crdb.log
> 2025/07/09 00:51:48 run.go:386: test runner logs in: artifacts/_runner-logs/test_runner-1752022308.log
> test runner logs in: artifacts/_runner-logs/test_runner-1752022308.log
> HTTP server listening on port 56238 on localhost: http://localhost:56238/
> 2025/07/09 00:51:48 run.go:148: global random seed: 1949199437086051249
> 2025/07/09 00:51:48 test_runner.go:398: test_run_id: will.choe-1752022308
> test_run_id: will.choe-1752022308
> [w0] 2025/07/09 00:51:48 work_pool.go:198: Acquired quota for 16 CPUs
> [w0] 2025/07/09 00:51:48 cluster.go:3204: Using randomly chosen arch="amd64", acceptance/build-info
> [w0] 2025/07/09 00:51:48 test_runner.go:798: Unable to create (or reuse) cluster for test acceptance/build-info due to: mocking.
> Unable to create (or reuse) cluster for test acceptance/build-info due to: mocking.
> 2025/07/09 00:51:48 test_impl.go:478: test failure #1: full stack retained in failure_1.log: (test_runner.go:873).func4: mocking [owner=test-eng]
> 2025/07/09 00:51:48 test_impl.go:200: Runtime assertions disabled
> [w0] 2025/07/09 00:51:48 test_runner.go:883: failed to post issue: mocking
> failed to post issue: mocking
> [w0] 2025/07/09 00:51:48 test_runner.go:1019: test failed: acceptance/build-info (run 1)
> [w0] 2025/07/09 00:51:48 test_runner.go:732: Releasing quota for 16 CPUs
> [w0] 2025/07/09 00:51:48 test_runner.go:744: No work remaining; runWorker is bailing out...
> No work remaining; runWorker is bailing out...
> [w0] 2025/07/09 00:51:48 test_runner.go:643: Worker exiting; no cluster to destroy.
> 2025/07/09 00:51:48 test_runner.go:460: PASS
> PASS
> 2025/07/09 00:51:48 test_runner.go:465: 1 clusters could not be created and 1 errors occurred while posting to github
> 1 clusters could not be created and 1 errors occurred while posting to github
> 2025/07/09 00:51:48 run.go:200: runTests destroying all clusters
> Error: some clusters could not be created
> failed to POST to GitHub
> ➜ cockroach git:(wchoe/147116-github-err-will-fail-pipeline) ✗ echo $?
> 12

149913: crosscluster/physical: persist standby poller progress r=dt a=msbutler

This patch sets the standby poller job's resolved time to the system time that standby descriptors have been updated to. This allows a reader tenant user to easily check that the poller job is running smoothly via SHOW JOB.

Epic: none
Release note: none

Co-authored-by: William Choe <williamchoe3@gmail.com>
Co-authored-by: Michael Butler <butler@cockroachlabs.com>
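A compressed sketch of the injection pattern described above, with the roachtest types replaced by stand-ins so it runs on its own (interface and field names are simplified, not the real signatures):

```go
package main

import (
	"errors"
	"fmt"
	"sync/atomic"
)

// GithubPoster is a simplified stand-in for the interface described above.
type GithubPoster interface {
	MaybePost(testName, message string) error
}

// failingPoster is what a unit test would inject: MaybePost always errors.
type failingPoster struct{}

func (failingPoster) MaybePost(string, string) error { return errors.New("mocking") }

type testRunner struct {
	numGithubPostErrs int32 // incremented on every failed post, checked after the run
}

func (r *testRunner) reportFailure(p GithubPoster, testName, output string) {
	if err := p.MaybePost(testName, output); err != nil {
		atomic.AddInt32(&r.numGithubPostErrs, 1)
	}
}

func main() {
	r := &testRunner{}
	r.reportFailure(failingPoster{}, "acceptance/build-info", "boom")
	// With at least one post error, the runner would exit with the new code 12.
	fmt.Println("github post errors:", atomic.LoadInt32(&r.numGithubPostErrs))
}
```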
aa-joshi pushed a commit that referenced this pull request on Nov 19, 2025
156830: storeliveness: smear storeliveness heartbeat messages to reduce goroutine spikes at heartbeat interval tick r=miraradeva,iskettaneh a=dodeca12

This PR introduces heartbeat-smearing logic that batches and smears Store Liveness heartbeat sends across destination nodes to prevent a thundering herd of goroutine spikes.

### Changes

Core changes are within these files:

```sh
pkg/kv/kvserver/storeliveness
├── support_manager.go # Rename SendAsync→EnqueueMessage, add smearing settings
└── transport.go       # Add a smearing sender goroutine that takes care of smearing when enabled
```

### Background

Previously, all stores in a cluster sent heartbeats immediately at each heartbeat interval tick. In large clusters with multi-store nodes, this created synchronized bursts of goroutine spikes that caused issues in other parts of the running CRDB process.

### Commits

**Commit: Introduce heartbeat smearing**

- Adds a smearing sender goroutine to `transport.go` that batches enqueued messages
- Smears send signals across queues using `taskpacer` to spread traffic over time
- Configurable via cluster settings (default: enabled)

**How it works:**

1. Messages are enqueued via `EnqueueMessage()` into per-node queues
2. When `SendAllEnqueuedMessages()` is called, the transport's smearing sender goroutine waits briefly to batch messages
3. The smearing sender goroutine uses `taskpacer` to pace signaling to each queue over a smear duration
4. Each `processQueue` goroutine drains its queue and sends when signalled

### New Cluster Settings

- `kv.store_liveness.heartbeat_smearing.enabled` (default: true) - Enable/disable smearing
- `kv.store_liveness.heartbeat_smearing.refresh` (default: 10ms) - Batching window duration
- `kv.store_liveness.heartbeat_smearing.smear` (default: 1ms) - Time to spread sends across queues

### Backward Compatibility

- The feature is disabled by setting `kv.store_liveness.heartbeat_smearing.enabled=false`
- When disabled, messages are sent immediately via the existing path (non-smearing mode)

### Testing

- Added comprehensive unit tests verifying:
  - Messages batch correctly across multiple destinations
  - Smearing spreads signals over the configured time window
  - Smearing mode vs. immediate mode behaviour
  - Message ordering and reliability

All existing tests were updated to call `SendAllEnqueuedMessages()` after enqueuing when smearing is enabled.

#### Roachprod testing

##### Prototype #1

Ran a prototype with a [similar design](cockroachdb#154942) (TL;DR of the prototype: the heartbeats were smeared by putting `SupportManager` goroutines to sleep; the current design ensures that `SupportManager` goroutines do not get blocked) on a 150-node roachprod cluster to verify that smearing works.

| Before changes (current behaviour on master) | After changes (prototype) |
|--------|--------|
| ![before](https://github.com/user-attachments/assets/32fe6ee0-437f-48eb-b3f1-087a3eafe6ac) | ![after](https://github.com/user-attachments/assets/66b5b82b-bbc4-4f47-a13e-5f6d42a1c6d4) |

##### Current changes

- Ran a roachprod test with the current changes but without the check for empty queues (more info: https://reviewable.io/reviews/cockroachdb/cockroach/156378#-). This check did end up proving vital, as the test results did not show the expected smearing behaviour.
- Ran a mini roachprod test on [this prototype commit](https://github.com/cockroachdb/cockroach/pull/155317/files#diff-9282b4b1d9a2fe32fae81e5776eb081e58069b4bc7db76718873b75f026e16c1) (where the only real difference from my changes is the inclusion of the length check on the queues that have messages on that commit), which showed the expected smearing behaviour.

![goroutine count with smearing enabled](https://github.com/user-attachments/assets/bd7778ef-9f8d-4dbf-8ed2-dac40e7fb03c)

Fixes: cockroachdb#148210
Release note: None

Co-authored-by: Swapneeth Gorantla <swapneeth.gorantla@cockroachlabs.com>
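A simplified, self-contained sketch of the batch-then-smear flow described above; the names and structure are invented for illustration (the real implementation lives in `transport.go` and uses `taskpacer`):

```go
package main

import (
	"fmt"
	"time"
)

type nodeID int

type smearingSender struct {
	refresh time.Duration       // batching window, cf. ...heartbeat_smearing.refresh
	smear   time.Duration       // total time to spread sends over, cf. ...heartbeat_smearing.smear
	queues  map[nodeID][]string // per-destination pending heartbeat messages
}

func (s *smearingSender) enqueue(to nodeID, msg string) {
	s.queues[to] = append(s.queues[to], msg)
}

func (s *smearingSender) sendAllEnqueued() {
	time.Sleep(s.refresh) // wait briefly so callers can batch a few more messages

	// Only queues that actually have messages participate in the smear.
	var targets []nodeID
	for id, q := range s.queues {
		if len(q) > 0 {
			targets = append(targets, id)
		}
	}
	if len(targets) == 0 {
		return
	}
	gap := s.smear / time.Duration(len(targets))
	for _, id := range targets {
		fmt.Printf("signalling queue for n%d (%d msgs)\n", id, len(s.queues[id]))
		s.queues[id] = nil // a real processQueue goroutine would drain and send here
		time.Sleep(gap)    // pacing stands in for the taskpacer used in the PR
	}
}

func main() {
	s := &smearingSender{
		refresh: 10 * time.Millisecond,
		smear:   1 * time.Millisecond,
		queues:  map[nodeID][]string{},
	}
	s.enqueue(1, "heartbeat")
	s.enqueue(2, "heartbeat")
	s.sendAllEnqueued()
}
```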
aa-joshi pushed a commit that referenced this pull request on Jan 8, 2026
159877: kvserver: deflake TestReadLoadMetricAccounting r=tbg a=tbg

`TestReadLoadMetricAccounting` has a history of flaking due to lease-related writes interfering with load metric measurements. Issue cockroachdb#141716 (and cockroachdb#141586) identified the same failure signature:

```
Error: Max difference between 0 and 85 allowed is 4, but difference was -85
```

The root cause was identified by @pav-kv: an "unexpected" leader lease upgrade write was interfering with the test's write-bytes measurements. PR cockroachdb#141843 added `tc.MaybeWaitForLeaseUpgrade()` to wait for lease upgrades before starting measurements.

**The fix from cockroachdb#141843 IS present** in the failing SHA. However, the test still flaked with the same error signature (85 write bytes when expecting 0). The logs show:

1. AddSSTableRequest evaluated (test setup)
2. Many LeaseInfoRequest polls (from MaybeWaitForLeaseUpgrade)
3. RequestLeaseRequest (the lease upgrade write)
4. More LeaseInfoRequest polls
5. "lease is now of type: LeaseLeader" - **upgrade complete**
6. "test #1" - test loop begins
7. GetRequest evaluated (the actual test request)
8. **Assertion fails** - 85 write bytes observed

The race condition is subtle: `MaybeWaitForLeaseUpgrade()` waits until `FindRangeLeaseEx()` reports the lease is upgraded, but it does **not** guarantee that the write bytes have been recorded to load stats. This is because stats are recorded "awkwardly late" on the client goroutine (`SendWithWriteBytes`).

The fix:

1. Wraps each test case iteration in `SucceedsSoon`
2. Resets load stats, sends the request, checks results
3. If any stat doesn't match (due to background activity like lease upgrades), returns an error to trigger a retry
4. Adds a comment noting that test cases must be idempotent (they are: all reads)

## Related Issues/PRs

| Issue/PR | Status | Relevance |
|----------|--------|-----------|
| cockroachdb#159719 | OPEN | Current failure |
| cockroachdb#141716 | CLOSED | Duplicate, Feb 2025 |
| cockroachdb#141586 | CLOSED | Original issue, Feb 2025 |
| cockroachdb#141843 | MERGED | Deflake attempt (wait for lease upgrade) |
| cockroachdb#141599 | MERGED | Added logging to help debug |
| cockroachdb#141905 | CLOSED | Duplicate |
| cockroachdb#134799 | CLOSED | Older occurrence |

This is more robust than trying to synchronize with specific background operations because it handles **any** source of interference, not just lease upgrades.

Epic: none
Closes cockroachdb#159719.

Co-authored-by: Tobias Grieger <tobias.b.grieger@gmail.com>
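A stripped-down illustration of the retry shape described in "The fix", with a local stand-in for CockroachDB's `testutils.SucceedsSoon` and the measurement reduced to a single counter (all values here are invented for the sketch):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// succeedsSoon is a minimal stand-in for testutils.SucceedsSoon: retry fn
// until it returns nil or the deadline expires.
func succeedsSoon(timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		if err := fn(); err == nil {
			return nil
		} else if time.Now().After(deadline) {
			return fmt.Errorf("condition never satisfied: %w", err)
		}
		time.Sleep(10 * time.Millisecond)
	}
}

func main() {
	attempts := 0
	// The test case body: reset stats, send the (idempotent, read-only)
	// request, and compare observed load stats with expectations. The first
	// attempt simulates interference from a background lease-upgrade write.
	err := succeedsSoon(time.Second, func() error {
		attempts++
		observedWriteBytes := 0
		if attempts == 1 {
			observedWriteBytes = 85 // background write polluted the measurement
		}
		if observedWriteBytes != 0 {
			return errors.New("unexpected write bytes; retrying")
		}
		return nil
	})
	fmt.Println("attempts:", attempts, "err:", err)
}
```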
This patch updates some of the SQL metric definitions to agg metrics to support db and app name labelling. We introduced the `sql.application_name_metrics.enabled` and `sql.database_name_metrics.enabled` cluster settings as part of cockroachdb#143719. The updated metrics export the db and app name as labels based on those cluster settings; the labels are not persisted to the TSDB.

Epic: CRDB-43153
Part of: CRDB-48251

Release note (sql change): Updated the metric type to agg metrics to support additional db and app name labels on metric export.
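As a usage sketch, enabling both labels from a Go client might look like the following; the setting names are the ones introduced above, while the connection string (a local insecure cluster) is an assumption:

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // CockroachDB speaks the Postgres wire protocol
)

func main() {
	db, err := sql.Open("postgres", "postgresql://root@localhost:26257/defaultdb?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Opt in to the extra labels; both settings default to false.
	for _, stmt := range []string{
		"SET CLUSTER SETTING sql.application_name_metrics.enabled = true",
		"SET CLUSTER SETTING sql.database_name_metrics.enabled = true",
	} {
		if _, err := db.Exec(stmt); err != nil {
			log.Fatal(err)
		}
	}
}
```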