Implement Kafka Mirroring Sink Connector #4

@valdo404

Kafka Mirroring Sink Connector Implementation

Description

Implement a Kafka sink connector for mirroring data between Kafka clusters, enabling cross-cluster replication and disaster recovery scenarios.

Tasks

  • Implement basic Kafka sink connector structure
  • Support topic mapping configuration (source to destination)
  • Add options for preserving partitioning scheme
  • Implement header and key preservation
  • Support offset tracking for monitoring replication lag
  • Add configuration for producer settings (batch size, compression, etc.)
  • Implement proper error handling and retry mechanisms
  • Support authentication methods (SASL, SSL)
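
As a sketch of the topic-mapping task above, a setting such as `orders:orders-mirror,events:events-mirror` could be parsed into a lookup table from source to destination topic. The function name and the comma-separated `src:dst` format are illustrative assumptions, not a settled configuration contract:

```rust
use std::collections::HashMap;

/// Parse a topic-mapping string of the form "src1:dst1,src2:dst2".
/// The format is a hypothetical example, not a finalized config spec.
fn parse_topic_mapping(spec: &str) -> Result<HashMap<String, String>, String> {
    let mut map = HashMap::new();
    for pair in spec.split(',').filter(|p| !p.trim().is_empty()) {
        match pair.split_once(':') {
            Some((src, dst)) if !src.trim().is_empty() && !dst.trim().is_empty() => {
                map.insert(src.trim().to_string(), dst.trim().to_string());
            }
            _ => return Err(format!("invalid mapping entry: {pair:?}")),
        }
    }
    Ok(map)
}

fn main() {
    let m = parse_topic_mapping("orders:orders-mirror, events:events-mirror").unwrap();
    assert_eq!(m["orders"], "orders-mirror");
    assert_eq!(m["events"], "events-mirror");
    assert!(parse_topic_mapping("missing-colon").is_err());
}
```

Failing fast on a malformed entry, rather than silently skipping it, keeps a misconfigured mirror from dropping topics unnoticed.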

Technical Details

  • Use Rust Kafka client libraries for efficient producer implementation
  • Implement batching for optimal throughput
  • Support exactly-once semantics with transaction IDs
  • Add comprehensive tests for different replication scenarios
  • Ensure proper handling of schema evolution
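
The batching point above can be sketched as a small accumulator that signals a flush once a record-count or byte-size threshold is crossed. The `Batcher` type and its thresholds are hypothetical; a production version would also flush on a linger timeout, analogous to the producer's `linger.ms` setting:

```rust
/// Minimal record batcher: accumulates serialized records and reports
/// when a flush should happen, based on count or total-byte thresholds.
/// Names and thresholds are illustrative, not the connector's API.
struct Batcher {
    max_records: usize,
    max_bytes: usize,
    buf: Vec<Vec<u8>>,
    bytes: usize,
}

impl Batcher {
    fn new(max_records: usize, max_bytes: usize) -> Self {
        Self { max_records, max_bytes, buf: Vec::new(), bytes: 0 }
    }

    /// Add a record; returns true when the batch is full and should flush.
    fn push(&mut self, record: Vec<u8>) -> bool {
        self.bytes += record.len();
        self.buf.push(record);
        self.buf.len() >= self.max_records || self.bytes >= self.max_bytes
    }

    /// Drain the current batch and reset the accumulator.
    fn flush(&mut self) -> Vec<Vec<u8>> {
        self.bytes = 0;
        std::mem::take(&mut self.buf)
    }
}

fn main() {
    let mut b = Batcher::new(3, 1024);
    assert!(!b.push(b"r1".to_vec()));
    assert!(!b.push(b"r2".to_vec()));
    assert!(b.push(b"r3".to_vec())); // count threshold reached
    assert_eq!(b.flush().len(), 3);
    assert!(b.flush().is_empty());
}
```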

Acceptance Criteria

  • Records are correctly mirrored between Kafka clusters
  • All configuration options work as expected
  • Performance exceeds that of the equivalent Java implementation
  • Memory usage remains low even with high throughput
  • All tests pass including failure recovery scenarios
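
The retry-with-backoff behavior implied by the error-handling task and the failure-recovery criterion above can be sketched as follows; all names and default values are illustrative assumptions, and a real connector would sleep for the computed delay between attempts:

```rust
/// Capped exponential backoff delay in milliseconds for a 0-based
/// retry attempt. Base and cap defaults are illustrative, not a spec.
fn backoff_ms(attempt: u32, base_ms: u64, cap_ms: u64) -> u64 {
    base_ms.saturating_mul(1u64 << attempt.min(20)).min(cap_ms)
}

/// Retry `op` up to `max_attempts` times, returning the first success
/// or the last error. The sleep between attempts is omitted to keep
/// the sketch synchronous and testable.
fn retry<T, E>(max_attempts: u32, mut op: impl FnMut(u32) -> Result<T, E>) -> Result<T, E> {
    let mut attempt = 0;
    loop {
        match op(attempt) {
            Ok(v) => return Ok(v),
            Err(e) if attempt + 1 >= max_attempts => return Err(e),
            Err(_) => attempt += 1,
        }
    }
}

fn main() {
    assert_eq!(backoff_ms(0, 100, 30_000), 100);
    assert_eq!(backoff_ms(4, 100, 30_000), 1_600);
    assert_eq!(backoff_ms(10, 100, 30_000), 30_000); // capped

    // Operation that fails twice, then succeeds on the third attempt.
    let result: Result<u32, &str> = retry(5, |attempt| {
        if attempt < 2 { Err("transient broker error") } else { Ok(attempt) }
    });
    assert_eq!(result, Ok(2));
}
```

Capping the delay keeps a long outage from growing the backoff unboundedly while still spacing out retries against a struggling broker.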

Priority

High

Complexity

Medium

Release Target

v0.3.0

Metadata

Assignees

No one assigned

Labels

  • enhancement (New feature or request)
  • feature:kafka-sink (Features related to Kafka sink connectors)
  • priority:high (High priority task that should be addressed in the next release)
