Merged
57 changes: 45 additions & 12 deletions CLAUDE.md
Original file line number Diff line number Diff line change
@@ -20,6 +20,7 @@ role-based access control (ADMIN/MEMBER), and PostgreSQL persistence with Flyway
- **Java 25** with virtual threads (Project Loom)
- **Spring Boot 4.0** with Spring Framework 7.0
- **Spring Modulith 2.0** for modular architecture
- **Apache Kafka** for event externalization (Spring Modulith integration)
- **PostgreSQL** with Flyway migrations
- **gRPC** alongside REST APIs

@@ -187,7 +188,8 @@ notification ──→ shared ←── user
**Notification Module** (`org.nkcoder.notification`):

- `NotificationService` - Public API for sending notifications
- `application/ApplicationEventListener` - In-process listener for domain events (sends emails)
- `application/KafkaEventListener` - Kafka consumer for externalized events

**Shared Module** (`org.nkcoder.shared`):

@@ -329,34 +331,54 @@ PATCH /api/users/{userId}/password - Reset password (admin only)

### Event-Driven Communication

Modules communicate via domain events using Spring Modulith's event infrastructure with **Kafka externalization**.

**Event Externalization**: Domain events marked with `@Externalized` are automatically published to Kafka topics:

| Event | Kafka Topic | Description |
|-------|-------------|-------------|
| `UserRegisteredEvent` | `user.registered` | Published when user completes registration |
| `OtpRequestedEvent` | `user.otp.requested` | Published when user requests OTP |

**Publishing Events** (in User module):

```java
// In AuthApplicationService after registration
domainEventPublisher.publish(new UserRegisteredEvent(user.getId(), user.getEmail(), user.getName()));
```

**Event Definition with Kafka Externalization**:

```java
@Externalized("user.registered") // Kafka topic name
public record UserRegisteredEvent(UUID userId, String email, String userName, LocalDateTime occurredOn)
        implements DomainEvent {}
```

**Listening to Events** (in Notification module):

```java
// In-process listener (immediate, same JVM)
@Component
public class ApplicationEventListener {
    @ApplicationModuleListener
    public void onUserRegistered(UserRegisteredEvent event) {
        notificationService.sendWelcomeEmail(event.email(), event.userName());
    }
}

// Kafka consumer (for external/distributed processing)
@Component
public class KafkaEventListener {
    @KafkaListener(topics = "user.registered", groupId = "notification-service")
    public void onUserRegistered(String message) {
        // Decode Base64 and deserialize JSON
    }
}
```
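The decoding step left as a comment in `KafkaEventListener` can be sketched with the JDK alone. The `field` helper and the sample payload below are illustrative assumptions, not project code; a real listener would deserialize with a JSON library such as Jackson:

```java
// Sketch of the "decode Base64 and deserialize JSON" step (JDK only).
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class PayloadDecoder {

    // Naive extraction of a string field from a flat JSON object (sketch only;
    // use Jackson's ObjectMapper in real code).
    static String field(String json, String name) {
        String key = "\"" + name + "\":\"";
        int start = json.indexOf(key) + key.length();
        return json.substring(start, json.indexOf('"', start));
    }

    public static void main(String[] args) {
        // A message as it might arrive on the `user.registered` topic:
        // Base64-encoded JSON, per the integration notes in this document.
        String raw = Base64.getEncoder().encodeToString(
                "{\"email\":\"jane@example.com\",\"userName\":\"Jane\"}"
                        .getBytes(StandardCharsets.UTF_8));

        // Decode Base64, then pick out the fields the notification needs.
        String json = new String(Base64.getDecoder().decode(raw), StandardCharsets.UTF_8);
        System.out.println(field(json, "email"));     // jane@example.com
        System.out.println(field(json, "userName"));  // Jane
    }
}
```

Running the class prints `jane@example.com` and `Jane`, the two values `notificationService.sendWelcomeEmail` expects.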

**Event Publication Table**: Spring Modulith persists events to the `event_publication` table for reliable delivery (transactional outbox pattern). Events are stored before being sent to Kafka, ensuring at-least-once delivery.

### Configuration Management

@@ -378,6 +400,7 @@ JWT_REFRESH_SECRET=<min 64 bytes for HS512>
JWT_ACCESS_EXPIRES_IN=15m
JWT_REFRESH_EXPIRES_IN=7d
CLIENT_URL=http://localhost:3000
SPRING_KAFKA_BOOTSTRAP_SERVERS=kafka:9092
```

**Configuration Binding**:
@@ -564,9 +587,10 @@ class ModulithArchitectureTest {

1. Create event record in `shared/kernel/domain/event/` (if cross-module) or `{module}/domain/event/` (if
module-internal)
2. Add `@Externalized("topic-name")` annotation to publish to Kafka
3. Inject `DomainEventPublisher` in your service
4. Call `domainEventPublisher.publish(event)` after business logic
5. Create `@ApplicationModuleListener` in consuming module (in-process) and/or `@KafkaListener` (Kafka consumer)
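The steps above can be sketched end to end. `DomainEvent`, `DomainEventPublisher`, and `@Externalized` come from the project and Spring Modulith; minimal stand-ins are declared here so the sketch compiles on its own, and `PasswordChangedEvent` with its topic name is an invented example:

```java
// Hypothetical walk-through of adding a new domain event (self-contained sketch).
import java.time.LocalDateTime;
import java.util.UUID;
import java.util.function.Consumer;

interface DomainEvent {}  // stand-in for the shared-kernel marker interface

// Steps 1+2: a cross-module event record; in the real code this would carry
// @Externalized("user.password.changed") to map it to a Kafka topic.
record PasswordChangedEvent(UUID userId, String email, LocalDateTime occurredOn)
        implements DomainEvent {}

public class NewEventSketch {
    public static void main(String[] args) {
        // Steps 3+4: the injected publisher ultimately hands the event to the
        // event infrastructure; modeled here as a plain Consumer.
        Consumer<DomainEvent> publisher =
                e -> System.out.println("published " + e.getClass().getSimpleName());
        publisher.accept(new PasswordChangedEvent(
                UUID.randomUUID(), "jane@example.com", LocalDateTime.now()));
        // Step 5 would be an @ApplicationModuleListener (in-process) and/or a
        // @KafkaListener on the externalized topic in the consuming module.
    }
}
```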

**Database Schema Change**:

@@ -604,11 +628,20 @@ class ModulithArchitectureTest {
- Cross-module events go in `shared.kernel.domain.event/`
- Use `@ApplicationModuleListener` for reliable event handling (auto-retry, persistence)

**Kafka Integration**:

- Events with `@Externalized` annotation are automatically published to Kafka topics
- Consumer group: `notification-service`
- Messages are Base64-encoded JSON
- Kafka ports: `9092` (internal Docker), `29092` (external/host)
- Topics are auto-created on first publish

**Future Microservice Extraction**:
When ready to extract a module as a microservice:

1. Events are already externalized to Kafka - no change needed
2. REST/gRPC calls replace direct method calls
3. Module's `infrastructure/` adapters change, domain stays the same
4. Database can be separated per module
5. Kafka consumers in extracted service continue to receive events

8 changes: 4 additions & 4 deletions Dockerfile
@@ -2,7 +2,7 @@
# Multi-stage Dockerfile for Spring Boot Application
# =============================================================================
# Build: docker build -t user-service .
# Run: docker run -p 8080:8080 -p 9090:9090 user-service
# =============================================================================

# -----------------------------------------------------------------------------
@@ -14,7 +14,7 @@ WORKDIR /app

# Copy Gradle wrapper and build files first (for layer caching)
COPY gradle/ gradle/
COPY gradlew build.gradle.kts settings.gradle.kts gradle.properties ./

# Make gradlew executable
RUN chmod +x ./gradlew
@@ -49,11 +49,11 @@ RUN chown -R appuser:appgroup /app
USER appuser

# Expose REST and gRPC ports
EXPOSE 8080 9090

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=60s --retries=3 \
CMD wget --no-verbose --tries=1 --spider http://localhost:8080/actuator/health || exit 1

# JVM optimizations for containers
ENV JAVA_OPTS="-XX:+UseContainerSupport \
3 changes: 2 additions & 1 deletion README.md
@@ -13,6 +13,7 @@ A comprehensive user authentication and management service featuring OAuth2, OTP
| **Passwordless** | One-Time Password (OTP) login flow via email |
| **Governance** | Role-based access control (MEMBER, ADMIN), profile management |
| **Architecture** | **Modular Monolith** (Spring Modulith), DDD, Event-driven communication |
| **Messaging** | **Apache Kafka** for event externalization (Spring Modulith integration) |
| **Performance** | **Java 25 Virtual Threads**, gRPC for high-speed communication |

## Documentation Hub
@@ -28,7 +29,7 @@ A comprehensive user authentication and management service featuring OAuth2, OTP

### Prerequisites
- **Java 25** & Gradle 8+
- PostgreSQL 17 and Apache Kafka (or Docker)

### Running Locally
1. Copy `.env.example` to `.env` and configure secrets (JWT, Mail, OAuth2).
2 changes: 1 addition & 1 deletion auto/docker_logs
@@ -1,3 +1,3 @@
#!/usr/bin/env sh

docker compose -f docker-compose-all.yml logs -f --tail 50
8 changes: 7 additions & 1 deletion build.gradle.kts
@@ -42,6 +42,7 @@ dependencies {
implementation("org.springdoc:springdoc-openapi-starter-webmvc-ui:3.0.0")
implementation("org.springframework.modulith:spring-modulith-starter-core")
implementation("org.springframework.modulith:spring-modulith-starter-jpa")
implementation("org.springframework.modulith:spring-modulith-events-kafka")

// Database
implementation("org.springframework.boot:spring-boot-starter-flyway")
@@ -58,6 +59,9 @@ dependencies {
annotationProcessor("org.springframework.boot:spring-boot-configuration-processor")
developmentOnly("org.springframework.boot:spring-boot-docker-compose")

// Messaging
implementation("org.springframework.kafka:spring-kafka")

// Testing
testImplementation("org.springframework.boot:spring-boot-starter-webmvc-test")
testImplementation("org.springframework.boot:spring-boot-starter-webflux-test") // For WebTestClient
@@ -67,7 +71,9 @@ dependencies {
testImplementation("org.junit.jupiter:junit-jupiter:5.13.3")
testImplementation("org.testcontainers:junit-jupiter")
testImplementation("org.testcontainers:postgresql")
testImplementation("org.testcontainers:kafka")
testImplementation("org.springframework.modulith:spring-modulith-starter-test")
testImplementation("org.springframework.kafka:spring-kafka-test")

// gRPC and Protobuf
implementation("io.grpc:grpc-netty-shaded:1.77.0")
@@ -124,7 +130,7 @@ tasks.register("runLocal") {
// JVM optimization for microservices
tasks.named<org.springframework.boot.gradle.tasks.run.BootRun>("bootRun") {
jvmArgs = listOf(
"-Xms256m", "-Xmx512m", "-XX:+UseG1GC", "-XX:+UseStringDeduplication"
"-Xms512m", "-Xmx1024m", "-XX:+UseG1GC", "-XX:+UseStringDeduplication"
)
// Pass environment variables at execution time (not configuration time)
// This ensures .env variables sourced by auto/run are available
76 changes: 68 additions & 8 deletions docker-compose-all.yml
@@ -3,14 +3,15 @@
# =============================================================================
# Simulates dev/prod environment with all services running in containers.
#
#
# For production, use external secrets management (Vault, AWS Secrets Manager)
# instead of environment variables in this file.
#
# For container communication: the app, Kafka, and PostgreSQL all run inside Docker.
#   Kafka has two listeners: 9092 (internal) and 29092 (external)
#   The containerized app connects via `kafka:9092` (Docker network)
#   Host debugging happens via `localhost:29092`
#   KAFKA_LISTENERS: PLAINTEXT://:9092,EXTERNAL://:29092
#   KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,EXTERNAL://localhost:29092
# =============================================================================

services:
@@ -23,11 +24,13 @@ services:
dockerfile: Dockerfile
container_name: user-application
ports:
- "3001:3001" # REST API
- "8080:8080" # REST API
- "9090:9090" # gRPC API
depends_on:
postgres:
condition: service_healthy
kafka:
condition: service_healthy
environment:
# Profile: use 'dev' for development simulation, 'prod' for production
- SPRING_PROFILES_ACTIVE=dev
@@ -37,6 +40,9 @@ services:
- DATABASE_USERNAME=app_user
- DATABASE_PASSWORD=${DB_PASSWORD:-changeme_in_production}

# Kafka connection
- SPRING_KAFKA_BOOTSTRAP_SERVERS=kafka:9092

# JWT secrets - MUST be overridden in production!
# Generate with: openssl rand -base64 64
- JWT_ACCESS_SECRET=${JWT_ACCESS_SECRET:-dev-only-access-secret-key-must-be-at-least-64-bytes-long-for-hs512}
@@ -47,8 +53,18 @@

# JVM options for container environment
- JAVA_OPTS=-XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0

# Mail
- MAIL_USERNAME=${MAIL_USERNAME}
- MAIL_PASSWORD=${MAIL_PASSWORD}

# OAuth 2
- GOOGLE_CLIENT_ID=${GOOGLE_CLIENT_ID}
- GOOGLE_CLIENT_SECRET=${GOOGLE_CLIENT_SECRET}
- GITHUB_CLIENT_ID=${GITHUB_CLIENT_ID}
- GITHUB_CLIENT_SECRET=${GITHUB_CLIENT_SECRET}
healthcheck:
test: [ "CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3001/actuator/health" ]
test: [ "CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8080/actuator/health" ]
interval: 30s
timeout: 10s
retries: 3
@@ -65,6 +81,48 @@ services:
networks:
- app-network

# ---------------------------------------------------------------------------
# Apache Kafka
# ---------------------------------------------------------------------------
kafka:
image: apache/kafka:4.1.1
container_name: user-application-kafka
hostname: kafka
ports:
- "29092:29092" # External port for debugging (remove in production)
environment:
KAFKA_NODE_ID: 1
KAFKA_PROCESS_ROLES: broker,controller
KAFKA_LISTENERS: PLAINTEXT://:9092,CONTROLLER://:9093,EXTERNAL://:29092
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,EXTERNAL://localhost:29092
KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT
KAFKA_CONTROLLER_QUORUM_VOTERS: 1@kafka:9093
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
KAFKA_LOG_DIRS: /var/lib/kafka/data
volumes:
- kafka_data:/var/lib/kafka/data
healthcheck:
test: ["CMD-SHELL", "/opt/kafka/bin/kafka-broker-api-versions.sh --bootstrap-server localhost:9092 > /dev/null 2>&1"]
interval: 10s
timeout: 10s
retries: 5
start_period: 30s
restart: unless-stopped
deploy:
resources:
limits:
cpus: '1'
memory: 1G
reservations:
cpus: '0.25'
memory: 512M
networks:
- app-network

# ---------------------------------------------------------------------------
# PostgreSQL Database
# ---------------------------------------------------------------------------
@@ -102,6 +160,8 @@ services:
volumes:
postgres_data:
name: user-service-postgres-data
kafka_data:
name: user-service-kafka-data

networks:
app-network:
44 changes: 35 additions & 9 deletions docker-compose.yml
@@ -5,16 +5,12 @@
# spring.docker.compose.enabled=true (in application-local.yml)
#
# This file is auto-detected and managed by Spring Boot.
# The application runs on your host machine; only PostgreSQL and Kafka run in Docker.
#
# Usage:
# ./gradlew bootRun --args='--spring.profiles.active=local'
# (Spring Boot automatically starts/stops this compose file)
#
# For Kafka:
#   Kafka exposes a single client listener on port 29092; the host app connects via `localhost:29092`
#   KAFKA_LISTENERS: PLAINTEXT://:29092
#   KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:29092
# =============================================================================

services:
@@ -35,6 +31,36 @@ services:
timeout: 5s
retries: 5

kafka:
image: apache/kafka:4.1.1
container_name: user-service-kafka
hostname: kafka
ports:
- "29092:29092"
environment:
KAFKA_NODE_ID: 1
KAFKA_PROCESS_ROLES: broker,controller
KAFKA_LISTENERS: PLAINTEXT://:29092,CONTROLLER://:9093
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:29092
KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
KAFKA_CONTROLLER_QUORUM_VOTERS: 1@kafka:9093
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
KAFKA_LOG_DIRS: /var/lib/kafka/data
volumes:
- kafka_data:/var/lib/kafka/data
healthcheck:
test: ["CMD-SHELL", "/opt/kafka/bin/kafka-broker-api-versions.sh --bootstrap-server localhost:29092 > /dev/null 2>&1"]
interval: 10s
timeout: 10s
retries: 5
start_period: 30s

volumes:
postgres_data:
name: user-service-local-postgres-data
kafka_data:
name: user-service-local-kafka-data