This PR has been automatically marked as stale because it has not been touched in the last 14 days. If you'd like to keep it open, please leave a comment or add the 'long-lived' label, otherwise it'll be closed in 7 days.
#[pinned_drop]
impl<F> PinnedDrop for TracePropagationFuture<F> {
These are required to ensure that the zipkin thread-local state is set when the endpoint metric layer handles its timer updates.
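The mechanism can be pictured with a std-only sketch: a guard installs trace state into a thread local and restores the previous value on drop, so any code running while the guard is alive (such as a metric layer recording a timer) observes the sampled trace. The `TraceGuard` type and `u64` trace ID here are illustrative stand-ins, not zipkin's actual API.

```rust
use std::cell::Cell;

// Hypothetical stand-in for the zipkin thread-local trace state.
thread_local! {
    static CURRENT_TRACE: Cell<Option<u64>> = Cell::new(None);
}

/// Installs a trace ID into the thread-local state and restores the
/// previous value when dropped.
struct TraceGuard {
    previous: Option<u64>,
}

impl TraceGuard {
    fn set(trace_id: u64) -> TraceGuard {
        let previous = CURRENT_TRACE.with(|c| c.replace(Some(trace_id)));
        TraceGuard { previous }
    }
}

impl Drop for TraceGuard {
    fn drop(&mut self) {
        // Restore whatever was installed before this guard.
        CURRENT_TRACE.with(|c| c.set(self.previous));
    }
}

fn current_trace() -> Option<u64> {
    CURRENT_TRACE.with(|c| c.get())
}

fn main() {
    assert_eq!(current_trace(), None);
    {
        let _guard = TraceGuard::set(42);
        // Code running here (e.g. a timer update) sees the trace context.
        assert_eq!(current_trace(), Some(42));
    }
    // Guard dropped: the prior state is restored.
    assert_eq!(current_trace(), None);
}
```

A `#[pinned_drop]` impl serves the same purpose for the future itself: it lets the drop path run with the trace state installed, rather than with whatever happens to be on the thread.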
.insert_values("p95", snapshot.value(0.95) / NANOS_PER_MICRO_F64)
.insert_values("p99", snapshot.value(0.99) / NANOS_PER_MICRO_F64)
.insert_values("p999", snapshot.value(0.999) / NANOS_PER_MICRO_F64)
.insert_values("max", (snapshot.max() as f64) / NANOS_PER_MICRO)
Drive-by fix: previously the max was rounded down to the nearest whole microsecond, while the percentiles weren't.
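A small sketch of the rounding issue (the literal values and constant type are assumed, not taken from the crate): dividing the integer nanosecond count before converting to `f64` truncates to a whole microsecond, while casting first preserves the fractional part, matching how the percentiles are reported.

```rust
// Assumed f64 constant, mirroring the name used in the diff.
const NANOS_PER_MICRO: f64 = 1_000.0;

fn main() {
    let max_nanos: u64 = 2_750; // 2.75 microseconds

    // Old behavior: integer division truncates to 2, then the cast keeps 2.0.
    let truncated = (max_nanos / 1_000) as f64;

    // Fixed behavior: cast to f64 first, then divide as floats.
    let exact = max_nanos as f64 / NANOS_PER_MICRO;

    assert_eq!(truncated, 2.0);
    assert_eq!(exact, 2.75);
}
```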
Before this PR
We didn't report any exemplars for metrics, making it a bit harder to investigate slowness or other badness.
After this PR
==COMMIT_MSG==
Histogram and timer metrics now report exemplars.
==COMMIT_MSG==
The behavior here matches WC-Java: we report at most one exemplar per metric, corresponding to the highest-valued measurement from a sampled trace within the last reporting window (i.e. the slowest request for a timer metric).
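A minimal sketch of that windowing behavior (the `ExemplarSlot` and `Exemplar` types are illustrative, not the crate's actual implementation): each sampled measurement is offered to the slot, only the largest value in the window is kept, and taking the exemplar for a report resets the slot for the next window.

```rust
#[derive(Debug, Clone, PartialEq)]
struct Exemplar {
    trace_id: String,
    value: f64,
}

#[derive(Default)]
struct ExemplarSlot {
    current: Option<Exemplar>,
}

impl ExemplarSlot {
    /// Record a measurement from a sampled trace; keep it only if it is
    /// the largest seen so far in this reporting window.
    fn record(&mut self, trace_id: &str, value: f64) {
        match &self.current {
            Some(e) if e.value >= value => {}
            _ => {
                self.current = Some(Exemplar {
                    trace_id: trace_id.to_string(),
                    value,
                });
            }
        }
    }

    /// Take the exemplar for the report, resetting for the next window.
    fn take(&mut self) -> Option<Exemplar> {
        self.current.take()
    }
}

fn main() {
    let mut slot = ExemplarSlot::default();
    slot.record("trace-a", 120.0);
    slot.record("trace-b", 450.0); // slowest sampled request wins
    slot.record("trace-c", 300.0);

    let e = slot.take().unwrap();
    assert_eq!(e.trace_id, "trace-b");
    assert_eq!(slot.take(), None); // window has been reset
}
```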
Depends on palantir/witchcraft-rust-logging#40.
Metric output with an exemplar: