If the partition count or kafka IO size is large, then skip committing offsets that are not changed. Reduce kafka commit load #37510
kishorepola wants to merge 1 commit into apache:master from
Conversation
Summary of Changes

Hello @kishorepola, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed.

This pull request optimizes the Kafka offset commit process within Apache Beam's KafkaIO connector. Previously, the system would commit offsets for all topic-partitions, even if some were idle and their offsets hadn't changed. This behavior could lead to excessive metadata load on Kafka brokers. The implemented solution introduces in-memory tracking of the last committed offsets and ensures that only partitions with advanced offsets are committed, thereby reducing the commit load and improving overall system efficiency.
Activity
Assigning reviewers: R: @ahmedabu98 for label java. Note: If you would like to opt out of this review, comment Available commands:
The PR bot will only process comments in the main thread (not review comments).
Reminder, please take a look at this PR: @ahmedabu98 @sjvanrossum
Hi @tomstepp, can you help review this?
tomstepp
left a comment
Thanks for contributing this!
    toCommit.put(tp, new OffsetAndMetadata(next));
  }
}
Maybe we could add a debug log for the number of idle partitions?
class KafkaUnboundedReader<K, V> extends UnboundedReader<KafkaRecord<K, V>> {

  // Track last successfully committed offsets to suppress no-op commits for idle partitions.
  private final Map<TopicPartition, Long> lastCommittedOffsets = new HashMap<>();
Maybe we can also track the last commit time per partition? We could still commit if a partition has been idle for more than some threshold (10 minutes, for example).
I think this would also help in cases where customers use time-lag monitoring (tracking time since the last commit).
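A minimal sketch of this suggestion, not the actual KafkaIO code: it combines the offset check with a per-partition last-commit timestamp, forcing a commit when a partition has been idle past a threshold. The class name, the 10-minute constant, and the use of String keys in place of TopicPartition are all illustrative assumptions to keep the example dependency-free.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.HashMap;
import java.util.Map;

public class CommitDecider {
  // Hypothetical threshold, following the 10-minute example above.
  static final Duration MAX_IDLE = Duration.ofMinutes(10);

  final Map<String, Long> lastCommittedOffsets = new HashMap<>();
  final Map<String, Instant> lastCommitTime = new HashMap<>();

  /** Commit when the offset advanced, or when the partition has been idle too long. */
  boolean shouldCommit(String partition, long next, Instant now) {
    Long prev = lastCommittedOffsets.get(partition);
    Instant last = lastCommitTime.get(partition);
    boolean advanced = (prev == null || next > prev);
    boolean stale = (last == null || Duration.between(last, now).compareTo(MAX_IDLE) >= 0);
    if (advanced || stale) {
      lastCommittedOffsets.put(partition, next);
      lastCommitTime.put(partition, now);
      return true;
    }
    return false;
  }
}
```

With this shape, an idle partition still produces at most one commit per MAX_IDLE window, which keeps time-since-last-commit monitoring meaningful without restoring the full per-checkpoint commit load.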
Can we add a unit test to cover this new behavior?
From a quick search, maybe reuse or model it on sdks/java/io/kafka/src/test/java/org/apache/beam/sdk/io/kafka/KafkaCommitOffsetTest.java
Long prev = lastCommittedOffsets.get(tp);

if (prev == null || next > prev) {
  toCommit.put(tp, new OffsetAndMetadata(next));
I like the idea of this change. Can we keep the existing Java streams logic, but simply add a new filter step?
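A sketch of what that streams-based filter step might look like, not the actual KafkaIO code: String keys and Long offsets stand in for TopicPartition and OffsetAndMetadata so the example runs without Kafka on the classpath, and the method name is hypothetical.

```java
import java.util.Map;
import java.util.stream.Collectors;

public class OffsetFilter {
  /** Keeps only entries whose offset advanced past the last committed value. */
  static Map<String, Long> filterAdvanced(
      Map<String, Long> candidates, Map<String, Long> lastCommitted) {
    return candidates.entrySet().stream()
        // The new filter step: drop partitions whose offset has not moved.
        .filter(e -> {
          Long prev = lastCommitted.get(e.getKey());
          return prev == null || e.getValue() > prev;
        })
        .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
  }
}
```

Keeping the stream pipeline and adding only a `.filter(...)` stage leaves the existing collection logic untouched, which should make the diff smaller and easier to review.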
While committing offsets back to Kafka, Beam commits offsets for all the topics and partitions in the KafkaIO. If some topic-partitions are idle, the same old offset is still committed for them. This puts a lot of metadata pressure on the brokers if the Kafka cluster has many idle partitions or the cluster is reasonably large.
Added in-memory tracking for offsets.
Commit back only those offsets that have changed.
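The in-memory tracking described above can be sketched as follows. This is an illustration of the idea, not the actual KafkaIO change: String keys and Long offsets stand in for Kafka's TopicPartition and OffsetAndMetadata, and the class and method names are invented for the example.

```java
import java.util.HashMap;
import java.util.Map;

public class OffsetCommitTracker {
  // In-memory record of the last offsets the broker acknowledged.
  private final Map<String, Long> lastCommittedOffsets = new HashMap<>();

  /** Returns only the offsets that advanced since the last successful commit. */
  Map<String, Long> selectChanged(Map<String, Long> finalizedOffsets) {
    Map<String, Long> toCommit = new HashMap<>();
    for (Map.Entry<String, Long> e : finalizedOffsets.entrySet()) {
      Long prev = lastCommittedOffsets.get(e.getKey());
      if (prev == null || e.getValue() > prev) {
        toCommit.put(e.getKey(), e.getValue());
      }
    }
    return toCommit;
  }

  /** Update the record only after the broker acknowledges the commit. */
  void markCommitted(Map<String, Long> committed) {
    lastCommittedOffsets.putAll(committed);
  }
}
```

Updating the map only after a successful commit matters: if a commit fails, the offsets remain eligible for the next attempt instead of being silently dropped.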
Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:
addresses #123), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment fixes #<ISSUE NUMBER> instead.
Update CHANGES.md with noteworthy changes.
See the Contributor Guide for more tips on how to make the review process smoother.
To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md
GitHub Actions Tests Status (on master branch)
See CI.md for more information about GitHub Actions CI or the workflows README to see a list of phrases to trigger workflows.