---
description: Vibe coding implementation guidelines, strict rules, code patterns, and constraints for implementing Event-Driven Architecture (EDA) using 2026 standards.
technology: Event-Driven Architecture
domain: Architecture
complexity: Architect
last_evolution: 2026-03-27
vibe_coding_ready: true
tags: [eda, implementation-guide, kafka, microservices, typescript, nestjs, architecture-patterns]
topic: Event-Driven Implementation Guide
---
| 11 | + |
| 12 | +<div align="center"> |
| 13 | + # 🛠️ EDA Implementation Guide (Code Blueprint) |
| 14 | +</div> |
| 15 | + |
| 16 | +--- |
| 17 | + |
| 18 | +This blueprint details strict coding patterns and anti-patterns for implementing Event-Driven Architecture, ensuring "at-least-once" delivery, schema registry compliance, and robust idempotency. |

> [!IMPORTANT]
> **Implementation Contract:** All code must adhere to 2026 modern backend standards (Node.js 24+, TypeScript 5.5+, strict typing, decorators, and class-based dependency injection). Services must integrate with message brokers (e.g., Kafka) without tightly coupling business logic to the broker.

## Entity & Handler Relationships

```mermaid
classDiagram
    class DomainEvent {
        +String eventId
        +String aggregateId
        +Date occurredOn
        +Object payload
    }
    class EventPublisher {
        <<interface>>
        +publish(DomainEvent) void
    }
    class KafkaAdapter {
        -Producer producer
        +publish(DomainEvent) void
    }
    class EventHandler {
        +handle(DomainEvent) void
    }

    EventPublisher <|-- KafkaAdapter
    DomainEvent <-- EventHandler
    DomainEvent <-- EventPublisher
```
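The diagram above maps to TypeScript roughly as follows. This is a sketch, not the definitive implementation: `InMemoryPublisher` is a hypothetical stand-in for the Kafka-backed adapter, used only to keep the example self-contained.

```typescript
// Mirrors the class diagram: DomainEvent, EventPublisher, KafkaAdapter, EventHandler.
interface DomainEvent {
  eventId: string;
  aggregateId: string;
  occurredOn: Date;
  payload: Record<string, unknown>;
}

interface EventPublisher {
  publish(event: DomainEvent): Promise<void>;
}

// Illustrative in-memory adapter standing in for a real Kafka producer,
// so the EventPublisher contract can be exercised without a broker.
class InMemoryPublisher implements EventPublisher {
  readonly sent: DomainEvent[] = [];

  async publish(event: DomainEvent): Promise<void> {
    this.sent.push(event);
  }
}

interface EventHandler {
  handle(event: DomainEvent): Promise<void>;
}
```

Because business code depends only on the `EventPublisher` interface, the in-memory adapter and the Kafka adapter are interchangeable in tests.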

---

## 1. Idempotent Consumers (Crucial)

Because Kafka or RabbitMQ may deliver the same message more than once (e.g., during a consumer rebalance), handlers must be idempotent: processing the same `eventId` twice MUST NOT duplicate the business outcome (e.g., charging a credit card twice).

### ❌ Bad Practice
```typescript
class PaymentEventHandler {
  async handle(event: OrderCreatedEvent) {
    // ❌ Blindly processing the payment every time the event is received!
    // A duplicate Kafka message will charge the user again.
    await this.stripeService.charge(event.payload.amount);
    await this.db.payments.insert({ orderId: event.aggregateId, status: 'PAID' });
  }
}
```

### ✅ Best Practice
```typescript
class PaymentEventHandler {
  async handle(event: OrderCreatedEvent) {
    // ✅ 1. Check if we've already processed this specific event ID
    const alreadyProcessed = await this.db.processedEvents.exists(event.eventId);
    if (alreadyProcessed) {
      this.logger.warn(`Event ${event.eventId} already processed. Skipping.`);
      return;
    }

    // ✅ 2. Execute business logic idempotently
    await this.db.transaction(async (tx) => {
      await this.stripeService.charge(event.payload.amount);
      await tx.payments.insert({ orderId: event.aggregateId, status: 'PAID' });

      // ✅ 3. Record the event ID to prevent duplicate processing
      await tx.processedEvents.insert({ id: event.eventId, processedAt: new Date() });
    });
  }
}
```
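Note that a check-then-act sequence can still race if two consumer instances receive the same event concurrently. A common hardening is insert-first deduplication: record the `eventId` under a UNIQUE constraint and treat a duplicate-key failure as "already processed". The sketch below illustrates the idea with an in-memory `Set` standing in for the processed-events table; all class and method names are hypothetical.

```typescript
// In-memory stand-in for a processed_events table with a UNIQUE(event_id) constraint.
class ProcessedEventStore {
  private readonly seen = new Set<string>();

  // Returns true if the ID was newly recorded, false if it was a duplicate --
  // analogous to catching a unique-violation error from the database insert.
  tryRecord(eventId: string): boolean {
    if (this.seen.has(eventId)) return false;
    this.seen.add(eventId);
    return true;
  }
}

class DedupingHandler {
  readonly handled: string[] = [];

  constructor(private readonly store: ProcessedEventStore) {}

  async handle(event: { eventId: string }): Promise<void> {
    // Insert-first: losing the race means another consumer already owns this event.
    if (!this.store.tryRecord(event.eventId)) return;
    this.handled.push(event.eventId); // business logic would run here
  }
}
```

With a real database, the record and the business write should share one transaction so a failure rolls both back together.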

---
## 2. The Transactional Outbox Pattern

Saving state to the DB and publishing to Kafka are two separate writes: if the application crashes after the DB commit but before the publish, the message is permanently lost. To solve this "Dual-Write Problem", record the event in an Outbox table inside the same database transaction as the state change, and let a separate process relay it to the broker.

### ❌ Bad Practice
```typescript
class OrderService {
  async createOrder(data: CreateOrderDto) {
    // ❌ Dual-write problem!
    const order = await this.db.orders.insert(data); // 1. Save to DB

    // If the server crashes HERE, the event is never published,
    // and downstream services never know the order was created.

    await this.kafkaProducer.send('orders.created', order); // 2. Publish to Broker
  }
}
```

### ✅ Best Practice
```typescript
class OrderService {
  async createOrder(data: CreateOrderDto) {
    // ✅ The Outbox Pattern: Save BOTH the business entity and the event
    // in the exact same ACID database transaction.
    await this.db.transaction(async (tx) => {
      const order = await tx.orders.insert(data);

      const outboxEvent = {
        aggregateType: 'Order',
        aggregateId: order.id,
        eventType: 'OrderCreated',
        payload: JSON.stringify(order),
        createdAt: new Date(),
      };

      await tx.outbox.insert(outboxEvent); // Saves strictly to a local DB table
    });

    // A separate background process (e.g., Debezium or a Polling Worker)
    // reads the 'outbox' table and safely publishes to Kafka.
  }
}
```
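The background relay mentioned in the comment can be sketched as a polling worker. Everything here is illustrative: an array stands in for the `outbox` table and a callback for the Kafka producer; a production relay would typically be Debezium (change data capture) or a worker polling with row locks (e.g., `SELECT ... FOR UPDATE SKIP LOCKED` in Postgres).

```typescript
interface OutboxRow {
  id: number;
  eventType: string;
  payload: string;
  publishedAt: Date | null;
}

// Polls unpublished rows, publishes them, then marks them as sent.
// Publish-then-mark means a crash between the two steps re-sends the row,
// which is exactly the at-least-once behavior idempotent consumers tolerate.
class OutboxRelay {
  constructor(
    private readonly outbox: OutboxRow[],
    private readonly send: (topic: string, payload: string) => Promise<void>,
  ) {}

  // Returns how many pending rows were relayed in this pass.
  async drainOnce(): Promise<number> {
    const pending = this.outbox.filter((row) => row.publishedAt === null);
    for (const row of pending) {
      await this.send(row.eventType, row.payload);
      row.publishedAt = new Date();
    }
    return pending.length;
  }
}
```

The relay never touches business logic; it only moves already-committed rows to the broker, which is what keeps the pattern crash-safe.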

---

## 3. Strictly Typed Schemas (Schema Registry)

Microservices evolve independently. If a publisher changes the shape of a JSON event payload, all downstream subscribers will break. Always enforce a Schema Registry (Avro, Protobuf, JSON Schema) for all events.

### ✅ Best Practice (Avro Example)
```typescript
// 1. Define a strict Avro schema for the event
const orderCreatedSchema = {
  type: 'record',
  name: 'OrderCreated',
  fields: [
    { name: 'orderId', type: 'string' },
    { name: 'amount', type: 'double' },
    { name: 'customerId', type: 'string' }
    // Enforces backward compatibility rules via Confluent Schema Registry
  ]
};

class OrderKafkaPublisher {
  async publish(event: DomainEvent) {
    // 2. The payload is validated and serialized against the Schema Registry
    // before it ever reaches the Kafka topic.
    const encodedPayload = await this.schemaRegistry.encode(
      'orders.created-value',
      event.payload
    );

    await this.producer.send({
      topic: 'orders.created',
      messages: [{ key: event.aggregateId, value: encodedPayload }]
    });
  }
}
```
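On the consumer side the flow runs in reverse: decode against the registry, then dispatch. The sketch below fakes the registry with required-field validation over JSON strings so it stays self-contained; a real client (e.g., Confluent's) performs Avro (de)serialization and compatibility checks against the registry server, and all names here are illustrative.

```typescript
type OrderCreated = { orderId: string; amount: number; customerId: string };

// Minimal stand-in for a schema-registry client: rejects payloads that
// violate the "schema" (here, a required-field list) at serialization time,
// so malformed events never reach the topic.
class FakeSchemaRegistry {
  encode(payload: OrderCreated): string {
    for (const field of ["orderId", "amount", "customerId"] as const) {
      if (payload[field] === undefined) {
        throw new Error(`Schema violation: missing field '${field}'`);
      }
    }
    return JSON.stringify(payload);
  }

  decode(raw: string): OrderCreated {
    return JSON.parse(raw) as OrderCreated;
  }
}
```

Failing at encode time pushes schema errors onto the publisher, where they belong, instead of crashing every subscriber at decode time.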

---

<div align="center">
  [Back to Main Blueprint](./readme.md) <br><br>
  <b>Master these implementation constraints to guarantee asynchronous consistency! 🛠️</b>
</div>