Exposes JMX for brokers, and exemplifies a key cluster-level metric #93
Conversation
Given the countless options for consuming Kafka metrics, I'd like to avoid making a specific implementation like #49 "core" by adding it to the kafka and zookeeper manifests. Instead I'd like this repo to encourage experimentation with different methods. Also, since v3.0.0 there's an ongoing transition from the old addons concept to a feature folder. I haven't found a way to separate the addition of extra containers to core pods into opt-in manifest files. We do have to make the JMX_PORT env var a default, but that's rather standard for Kafka.

The scrape times on minikube for this single metric are 5-15 seconds for me. Not very good.
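Making JMX_PORT a default could look like the sketch below in the kafka container spec. This is illustrative only: the port number 5555 and the extra KAFKA_JMX_OPTS flags are assumptions, not necessarily what this repo uses (kafka-run-class.sh picks up both variables):

```yaml
# Sketch: enabling remote JMX on the kafka container (values are assumptions)
env:
- name: JMX_PORT
  value: "5555"
- name: KAFKA_JMX_OPTS
  value: >-
    -Dcom.sun.management.jmxremote=true
    -Dcom.sun.management.jmxremote.authenticate=false
    -Dcom.sun.management.jmxremote.ssl=false
ports:
- name: jmx
  containerPort: 5555
```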
This feature worked in #49 too, but it was less of an advantage there because it used the same configmap as kafka; here you can simply apply it separately.
... though I've seen PartitionCount toggle between including the partitions in __consumer_offsets and not doing so.
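The toggling is easier to spot if you sum PartitionCount across brokers at query time. The metric name below is an assumption based on typical jmx_exporter naming, not necessarily this PR's rules; Kafka's offsets topic defaults to offsets.topic.num.partitions=50, so the sum can jump by 50 when those partitions appear:

```promql
# Total partitions hosted across the cluster; a +/-50 swing suggests
# __consumer_offsets (default 50 partitions) toggling in and out
sum(kafka_server_ReplicaManager_PartitionCount)
```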
This PR is a poor replacement for #49. If I kill one broker (after editing the init script so it won't come up again), my /metrics with the two test clients running alternates between ... and ..., which means UnderReplicated is per broker, unlike with the test in #95. Got the scrape times down to 0.2 seconds again, that's a consolation :) I'll go ahead and explore more monitoring options. The addition of JMX_PORT, ...
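Since UnderReplicated is reported per broker, a cluster-level view needs aggregation at query time. Again the metric name is an assumption based on typical jmx_exporter naming:

```promql
# Cluster-wide count of under-replicated partitions,
# summed over the per-broker gauges
sum(kafka_server_ReplicaManager_UnderReplicatedPartitions)
```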
#128 replaced this PR. With it you get, for example, ...
Already included in #49, but I would like to keep metrics opt-in, whereas that PR adds quite a heavy container to the pod.
The exposed port can be used by kafka-manager (#83) - just tick the JMX box when adding a cluster - to see bytes in/out rates.