
Commit a4d1b46

Merge pull request #22 from beginwebdev2002/arch/event-driven-architecture-4464069809475679234
feat(arch): initialize specialized documentation for event-driven architecture
2 parents 86152cc + f8a6746 commit a4d1b46

6 files changed

Lines changed: 463 additions & 0 deletions


Lines changed: 86 additions & 0 deletions
---
description: Vibe coding guidelines for the asynchronous request and data flow lifecycle in an Event-Driven Architecture (EDA).
technology: Event-Driven Architecture
domain: Architecture
complexity: Architect
last_evolution: 2026-03-27
vibe_coding_ready: true
tags: [eda, data-flow, sequence-diagram, asynchronous, messaging, event-lifecycle]
topic: Event-Driven Data Flow
---

<div align="center">

# 📊 EDA Data Flow (Sequence Blueprint)

</div>

---

This document illustrates the execution lifecycle of a distributed, asynchronous event-driven system. It defines the path an initial synchronous request takes as it propagates across independent microservices via a message broker.

## Mental Model & Asynchronous Lifecycle

The architectural contract is simple:

- The **API Gateway (ingress)** accepts the synchronous HTTP request from the User.
- It immediately validates the request, queues a Command/Event on the **Message Broker (Kafka/RabbitMQ)**, and returns HTTP 202 Accepted.
- Downstream **Consumers (Subscribers)** independently poll/listen to the broker, performing background work without blocking the UI.
- Finally, the UI relies on WebSocket, Server-Sent Events (SSE), or polling for real-time completion status.

> [!IMPORTANT]
> **Data Flow Constraint:** A microservice handling an event MUST NOT synchronously invoke another microservice. It must process the event, update its own database, and optionally emit a subsequent domain event.

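The ingress contract above can be sketched as a minimal handler. This is a sketch only: the broker is stubbed with an in-memory array, and `CheckoutDto`, `acceptCheckout`, and the event name are illustrative, not part of the blueprint.

```typescript
// Hypothetical DTO shape for the incoming request.
interface CheckoutDto { cartId: string; items: string[] }

interface BrokerEvent { type: string; payload: unknown }

const queue: BrokerEvent[] = []; // in-memory stand-in for Kafka/RabbitMQ

function publish(event: BrokerEvent): void {
  queue.push(event);
}

// The gateway validates, enqueues, and answers 202 immediately --
// it never waits for downstream consumers to finish.
function acceptCheckout(dto: CheckoutDto): { status: number; body: { orderStatus: string } } {
  if (dto.items.length === 0) {
    return { status: 400, body: { orderStatus: "Rejected" } }; // fail fast, publish nothing
  }
  publish({ type: "CheckoutInitiatedEvent", payload: dto });
  return { status: 202, body: { orderStatus: "Pending" } };
}
```

The key design point is that the only synchronous work is validation; everything after the publish happens off the request path.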
### Sequence Diagram: Distributed E-Commerce Checkout

```mermaid
sequenceDiagram
    autonumber
    actor Client as User (Frontend)
    participant API as API Gateway (REST)
    participant Broker as Message Broker (Kafka)
    participant OrderSvc as Order Service
    participant PaySvc as Payment Service
    participant NotifySvc as Notification Service

    Client->>API: POST /checkout (Cart DTO)
    API-->>Broker: Publish [CheckoutInitiatedEvent]
    API-->>Client: HTTP 202 Accepted (Order Pending)

    Broker-->>OrderSvc: Consume [CheckoutInitiatedEvent]
    OrderSvc->>OrderSvc: Create Pending Order (DB)
    OrderSvc-->>Broker: Publish [OrderCreatedEvent]

    Broker-->>PaySvc: Consume [OrderCreatedEvent]
    PaySvc->>PaySvc: Process Stripe Payment

    alt Payment Success
        PaySvc-->>Broker: Publish [PaymentSucceededEvent]
        Broker-->>OrderSvc: Consume [PaymentSucceededEvent]
        OrderSvc->>OrderSvc: Update Order Status -> Confirmed
        Broker-->>NotifySvc: Consume [PaymentSucceededEvent]
        NotifySvc->>Client: Push Notification / Email (Success)
    else Payment Failure
        PaySvc-->>Broker: Publish [PaymentFailedEvent]
        Broker-->>OrderSvc: Consume [PaymentFailedEvent]
        OrderSvc->>OrderSvc: Update Order Status -> Failed
        Broker-->>NotifySvc: Consume [PaymentFailedEvent]
        NotifySvc->>Client: Push Notification / Email (Failure)
    end
```

---

## The Outbox Pattern (Reliable Publishing)

To make the dual write safe (saving state in the local DB and publishing the event to Kafka must succeed or fail together), EDA relies on the **Transactional Outbox Pattern**.

1. The service begins a local DB transaction.
2. The service saves business entity data (e.g., `orders` table).
3. The service inserts an event record into an `outbox` table in the SAME transaction.
4. The service commits the transaction.
5. A background process (e.g., Debezium, CDC) reads the `outbox` table and publishes the messages to Kafka, ensuring "at-least-once" delivery.

---

<div align="center">

[Back to Main Blueprint](./readme.md) <br><br>
<b>Master the event lifecycle to prevent distributed monoliths! 🌊</b>

</div>
Lines changed: 94 additions & 0 deletions
---
description: Vibe coding guidelines for the folder structure and structural hierarchy of Event-Driven Architecture (EDA) projects.
technology: Event-Driven Architecture
domain: Architecture
complexity: Architect
last_evolution: 2026-03-27
vibe_coding_ready: true
tags: [eda, folder-structure, architecture-hierarchy, backend, microservices]
topic: Event-Driven Folder Structure
---

<div align="center">

# 📁 EDA Folder Structure (Hierarchy Blueprint)

</div>

---

This document outlines the optimal 2026-grade folder structure for an Event-Driven microservice (or bounded context). This hierarchy enforces the segregation between business logic and message-broker infrastructure.

## Folder Hierarchy (Mental Model)

A robust EDA microservice separates its core domain from its external adapters (Publishers and Subscribers). The overall directory layout aligns closely with DDD or Clean Architecture, with event handlers acting as secondary entry points (instead of HTTP controllers).

> [!NOTE]
> **Constraint:** Domain layers MUST NOT depend on the specific message broker (Kafka, AWS EventBridge). Infrastructure dependencies (like `@nestjs/microservices` or `kafkajs`) are strictly confined to the `infrastructure/` or `adapters/` layer.

### System Diagram: Layered Hierarchy

```mermaid
graph TD
    Root[Microservice Root] --> Domain[core/domain]
    Root --> App[core/application]
    Root --> Infra[infrastructure]

    Infra --> DB["database (Adapters)"]
    Infra --> Msg["messaging (Broker Integrations)"]

    Msg --> Pub["publishers (Producers)"]
    Msg --> Sub["subscribers (Consumers)"]
    Msg --> Config[kafka-config]

    App --> Handlers[Command/Event Handlers]
    Handlers -.-> Pub

    %% Apply strict styling tokens
    classDef default fill:#e1f5fe,stroke:#03a9f4,stroke-width:2px,color:#000;
    classDef component fill:#e8f5e9,stroke:#4caf50,stroke-width:2px,color:#000;
    classDef layout fill:#f3e5f5,stroke:#9c27b0,stroke-width:2px,color:#000;

    class Root layout;
    class Domain,App,Handlers component;
    class Infra,DB,Msg,Pub,Sub,Config default;
```

---

## Detailed Directory Tree

```text
src/
├── 📁 core/                       # Pure business logic (No infra dependencies)
│   ├── 📁 domain/                 # Aggregates, Value Objects, Domain Events
│   │   ├── events/                # Internal domain event types (e.g., OrderCreated)
│   │   └── models/                # Business entities
│   └── 📁 application/            # Use case orchestration
│       ├── commands/              # Sync logic executed before emitting events
│       └── handlers/              # Logic that responds to consumed events
│
├── 📁 infrastructure/             # Framework and Broker integrations
│   ├── 📁 messaging/              # The Event-Driven core
│   │   ├── 📁 config/             # Kafka client configuration, schemas
│   │   ├── 📁 publishers/         # Outbound adapters (Emit events to Broker)
│   │   │   └── OrderPublisher.ts  # Implements IEventPublisher from Core
│   │   ├── 📁 subscribers/        # Inbound adapters (Listen to Broker queues)
│   │   │   └── PaymentConsumer.ts # Routes Kafka messages to Application handlers
│   │   └── 📁 schemas/            # AsyncAPI/Avro/Protobuf message schemas
│   └── 📁 database/               # DB adapters (Repositories, Outbox pattern)
│
└── main.ts                        # Application bootstrap (Starts HTTP + Consumers)
```

### Explanation of Key Directories

1. **`core/domain/events/`**: These are internal representations of an event. They are purely business-focused (e.g., `OrderPlacedDomainEvent`). They know nothing about Kafka serialization.
2. **`infrastructure/messaging/publishers/`**: This directory contains implementations of your output ports. It serializes the internal domain event into a payload (JSON/Avro) and publishes it to the external topic.
3. **`infrastructure/messaging/subscribers/`**: This directory acts exactly like HTTP Controllers. A consumer listens to a Kafka topic, deserializes the message, and hands it off to a `core/application/handlers/` class to perform the actual business logic.
4. **`infrastructure/messaging/schemas/`**: Strongly typed schemas (like Protobuf or Avro) defining the contract for events passing through the broker.
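The dependency rule can be sketched as a port defined in `core/` with a broker-facing adapter in `infrastructure/`. The tree names `IEventPublisher`; its method signature below, and the in-memory stand-in for a kafkajs producer, are assumptions for illustration.

```typescript
interface DomainEvent { eventId: string; aggregateId: string; payload: unknown }

// core/ -- the output port. No broker imports are allowed here.
interface IEventPublisher {
  publish(topic: string, event: DomainEvent): Promise<void>;
}

// infrastructure/messaging/publishers/ -- the only layer that may know the
// broker. An in-memory array stands in for a real kafkajs producer.
class InMemoryOrderPublisher implements IEventPublisher {
  readonly sent: Array<{ topic: string; event: DomainEvent }> = [];

  async publish(topic: string, event: DomainEvent): Promise<void> {
    this.sent.push({ topic, event }); // producer.send(...) in a real adapter
  }
}
```

Because application code depends only on `IEventPublisher`, swapping Kafka for EventBridge (or a test double, as here) touches nothing in `core/`.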
---

<div align="center">

[Back to Main Blueprint](./readme.md) <br><br>
<b>A clean directory tree prevents tightly-coupled broker dependencies! 📁</b>

</div>
Lines changed: 180 additions & 0 deletions
---
description: Vibe coding implementation guidelines, strict rules, code patterns, and constraints for implementing Event-Driven Architecture (EDA) using 2026 standards.
technology: Event-Driven Architecture
domain: Architecture
complexity: Architect
last_evolution: 2026-03-27
vibe_coding_ready: true
tags: [eda, implementation-guide, kafka, microservices, typescript, nestjs, architecture-patterns]
topic: Event-Driven Implementation Guide
---

<div align="center">

# 🛠️ EDA Implementation Guide (Code Blueprint)

</div>

---

This blueprint details strict coding patterns and anti-patterns for implementing Event-Driven Architecture, ensuring "at-least-once" delivery, schema registry compliance, and robust idempotency.

> [!IMPORTANT]
> **Implementation Contract:** All code must adhere to 2026 modern backend standards (Node.js 24+, TypeScript 5.5+, strict types, decorators, or class-based dependency injection). Services must integrate safely with message brokers (Kafka) without tightly coupling business logic.

## Entity & Handler Relationships

```mermaid
classDiagram
    class DomainEvent {
        +String eventId
        +String aggregateId
        +Date occurredOn
        +Object payload
    }
    class EventPublisher {
        <<interface>>
        +publish(DomainEvent) void
    }
    class KafkaAdapter {
        -Producer producer
        +publish(DomainEvent) void
    }
    class EventHandler {
        +handle(DomainEvent) void
    }

    EventPublisher <|.. KafkaAdapter
    DomainEvent <-- EventHandler
    DomainEvent <-- EventPublisher
```

---

## 1. Idempotent Consumers (Crucial)

Because Kafka or RabbitMQ may deliver the same message twice (e.g., during a consumer rebalance), handlers must be idempotent. Processing the exact same `eventId` twice MUST NOT duplicate the business outcome (e.g., charging a credit card twice).

### ❌ Bad Practice
```typescript
class PaymentEventHandler {
  async handle(event: OrderCreatedEvent) {
    // ❌ Blindly processing the payment every time the event is received!
    // A duplicate Kafka message will charge the user again.
    await this.stripeService.charge(event.payload.amount);
    await this.db.payments.insert({ orderId: event.aggregateId, status: 'PAID' });
  }
}
```

### ✅ Best Practice
```typescript
class PaymentEventHandler {
  async handle(event: OrderCreatedEvent) {
    // ✅ 1. Check if we've already processed this specific event ID
    const alreadyProcessed = await this.db.processedEvents.exists(event.eventId);
    if (alreadyProcessed) {
      this.logger.warn(`Event ${event.eventId} already processed. Skipping.`);
      return;
    }

    // ✅ 2. Execute business logic inside one transaction
    await this.db.transaction(async (tx) => {
      await this.stripeService.charge(event.payload.amount);
      await tx.payments.insert({ orderId: event.aggregateId, status: 'PAID' });

      // ✅ 3. Record the event ID to prevent duplicate processing
      // (pair this with a UNIQUE constraint on the ID so that two
      // concurrent consumers cannot both pass the check in step 1)
      await tx.processedEvents.insert({ id: event.eventId, processedAt: new Date() });
    });
  }
}
```
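Condensed to an in-memory demo (names and types are illustrative, not from the handler above), the guard makes a duplicate delivery a no-op:

```typescript
const processed = new Set<string>();   // stand-in for the processedEvents table
let totalCharged = 0;                  // stand-in for stripeService.charge totals

// Returns true when the event was processed, false when it was a duplicate.
function handlePayment(eventId: string, amount: number): boolean {
  if (processed.has(eventId)) return false; // duplicate delivery: skip
  totalCharged += amount;                   // the side effect runs exactly once
  processed.add(eventId);
  return true;
}
```

Delivering the same `eventId` twice leaves `totalCharged` unchanged the second time, which is exactly the property "at-least-once delivery plus idempotent consumers" relies on.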

---

## 2. The Transactional Outbox Pattern

To solve the "Dual-Write Problem" (reliably saving state to the DB and publishing to Kafka), we use an Outbox table. Without it, if the application crashes after saving to the DB but before publishing to Kafka, the message is permanently lost.

### ❌ Bad Practice
```typescript
class OrderService {
  async createOrder(data: CreateOrderDto) {
    // ❌ Dual-write problem!
    const order = await this.db.orders.insert(data); // 1. Save to DB

    // If the server crashes HERE, the event is never published,
    // and downstream services never know the order was created.

    await this.kafkaProducer.send('orders.created', order); // 2. Publish to Broker
  }
}
```

### ✅ Best Practice
```typescript
class OrderService {
  async createOrder(data: CreateOrderDto) {
    // ✅ The Outbox Pattern: Save BOTH the business entity and the event
    // in the exact same ACID database transaction.
    await this.db.transaction(async (tx) => {
      const order = await tx.orders.insert(data);

      const outboxEvent = {
        aggregateType: 'Order',
        aggregateId: order.id,
        eventType: 'OrderCreated',
        payload: JSON.stringify(order),
        createdAt: new Date(),
      };

      await tx.outbox.insert(outboxEvent); // Writes only to a local DB table
    });

    // A separate background process (e.g., Debezium or a Polling Worker)
    // reads the 'outbox' table and safely publishes to Kafka.
  }
}
```
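The background relay mentioned in the closing comment can be sketched as a polling worker. This is a sketch with the outbox table and Kafka producer replaced by in-memory stand-ins; the `published` flag is an assumption (CDC tools like Debezium track their position differently).

```typescript
interface OutboxEvent { id: number; topic: string; payload: string; published: boolean }

const outbox: OutboxEvent[] = [
  { id: 1, topic: 'orders.created', payload: '{"orderId":"o1"}', published: false },
];
const delivered: OutboxEvent[] = []; // stand-in for the Kafka topic

// One polling tick: publish every unpublished row, then mark it published.
// Marking AFTER sending yields at-least-once delivery -- a crash between
// the two steps causes a re-send, never a loss. This is why consumers
// must be idempotent (section 1).
function relayTick(): number {
  const pending = outbox.filter((e) => !e.published);
  for (const event of pending) {
    delivered.push(event);  // producer.send(...) in a real relay
    event.published = true; // UPDATE outbox SET published = true WHERE id = ...
  }
  return pending.length;
}
```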

---

## 3. Strictly Typed Schemas (Schema Registry)

Microservices evolve independently. If a publisher changes the shape of a JSON event payload, all downstream subscribers will break. Always enforce a Schema Registry (Avro, Protobuf, JSON Schema) for all events.

### ✅ Best Practice (Avro Example)
```typescript
// 1. Define a strict Avro schema for the event
const orderCreatedSchema = {
  type: 'record',
  name: 'OrderCreated',
  fields: [
    { name: 'orderId', type: 'string' },
    { name: 'amount', type: 'double' },
    { name: 'customerId', type: 'string' }
    // Enforces backward compatibility rules via Confluent Schema Registry
  ]
};

class OrderKafkaPublisher {
  async publish(event: DomainEvent) {
    // 2. The payload is validated and serialized against the Schema Registry
    // before it ever reaches the Kafka topic. (With
    // @kafkajs/confluent-schema-registry, encode() takes the numeric schema
    // ID registered under the subject, not the subject name itself.)
    const schemaId = await this.schemaRegistry.getLatestSchemaId('orders.created-value');
    const encodedPayload = await this.schemaRegistry.encode(schemaId, event.payload);

    await this.producer.send({
      topic: 'orders.created',
      messages: [{ key: event.aggregateId, value: encodedPayload }]
    });
  }
}
```
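On the consuming side the same contract must be enforced before any handler runs; with `@kafkajs/confluent-schema-registry` that is `registry.decode(message.value)`. As a self-contained stand-in for that step, the sketch below validates a JSON payload against the `OrderCreated` field list (JSON substitutes for Avro purely to keep the example runnable):

```typescript
interface OrderCreated { orderId: string; amount: number; customerId: string }

// Hypothetical consumer-side guard: reject payloads that violate the
// contract before the application handler ever sees them.
function decodeOrderCreated(raw: string): OrderCreated {
  const obj = JSON.parse(raw) as Record<string, unknown>;
  if (typeof obj.orderId !== 'string') throw new Error('schema violation: orderId');
  if (typeof obj.amount !== 'number') throw new Error('schema violation: amount');
  if (typeof obj.customerId !== 'string') throw new Error('schema violation: customerId');
  return obj as unknown as OrderCreated;
}
```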

---

<div align="center">

[Back to Main Blueprint](./readme.md) <br><br>
<b>Master these implementation constraints to guarantee asynchronous consistency! 🛠️</b>

</div>
