
[Infra] Implement a split architecture using two EC2 instances #39

Open
dPwls0125 wants to merge 13 commits into develop from feat/infra

Conversation


@dPwls0125 dPwls0125 commented Apr 8, 2026

I set things up to deploy the DB and the apps separately across two EC2 instances.

1. Split Docker Compose files

| Location | File | Services included | Notes |
| --- | --- | --- | --- |
| EC2 1 (App) | docker-compose.yml | Nginx, api-server, payment-server, postpayment-server, status-server, zookeeper, kafka | DB-related environment variables are injected from an external .env. Added an Nginx reverse proxy (port :80) |
| EC2 2 (DB) | docker-compose.db.yml | mysql, mysql-payment, redis | Only the database and cache servers are isolated |
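As a rough illustration of the split above, the DB-side file might look like the following minimal sketch. This is not the actual file: the `mysql-payment` service is omitted, and the MySQL image version, container name, credentials variable, and `mysql-data` volume name are assumptions (only `redis:7`, `redis-delivery`, and `redis-data:/data` appear elsewhere in this PR's review).

```yaml
# docker-compose.db.yml — illustrative sketch, not the actual file.
# MySQL image/name/credentials and the mysql-data volume are assumptions.
services:
  mysql:
    image: mysql:8.0
    container_name: mysql-delivery
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_PW}
    volumes:
      - mysql-data:/var/lib/mysql

  redis:
    image: redis:7
    container_name: redis-delivery
    ports:
      - "6379:6379"
    volumes:
      - redis-data:/data

# Named volumes must be declared at the top level so the references above resolve.
volumes:
  mysql-data:
  redis-data:
```

Keeping only stateful services here means the App server's compose file can be rebuilt freely without touching data.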

2. Split Spring profiles (local ↔ prod)

I added application-prod.yml to three modules: api-server, Payment, and status-server.

  • Local (SPRING_PROFILE unset):
    keeps the existing local settings — the DB on localhost, create-drop or update DDL, etc.
  • Production EC2 (SPRING_PROFILE=prod):
    connects to the production DB address, lowers SQL query logging from DEBUG to WARN, and overrides the DDL setting with a safe, data-preserving production mode (e.g., update).

3. Background auto-deployment process

  • Added an Nginx reverse proxy
    • Client (:80) → Nginx → api-server (:8080)
  • CD setup
    • When code is pushed to the develop branch, the pipeline SSHes into the App Server (EC2 1), which hosts Nginx and the application code, and automatically redeploys the latest version.
    • The DB is deployed separately so that code changes do not affect it.
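The push-triggered CD flow above could be sketched as a workflow like the following. The repository directory and compose commands follow the deploy script quoted in the review comments; the `appleboy/ssh-action` action and the secret names are assumptions, not the actual workflow contents.

```yaml
# .github/workflows/deploy.yml — illustrative sketch of the CD flow.
# appleboy/ssh-action and the secret names are assumptions.
name: deploy
on:
  push:
    branches: [develop]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Redeploy on App Server (EC2 1) over SSH
        uses: appleboy/ssh-action@v1.0.3
        with:
          host: ${{ secrets.EC2_APP_HOST }}
          username: ${{ secrets.EC2_APP_USER }}
          key: ${{ secrets.EC2_SSH_KEY }}
          script: |
            cd ~/Food-Delivery-Service
            git pull
            # Rebuild and restart containers; the DB lives on EC2 2, so it is untouched
            docker-compose up -d --build
            docker image prune -f
```

Since the DB compose file runs on the other instance, this job never restarts MySQL or Redis.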

Summary by CodeRabbit

  • New features

    • Added real-time notification (SSE) subscription handling
    • Added manual/push deployment workflows to EC2
    • Added an Nginx reverse proxy configuration
    • Added a DB-only Docker Compose configuration
    • Added EC2 app/DB bootstrap automation scripts
  • Improvements

    • Cleaned up production (prod) settings and logging levels
    • Infrastructure reorganization: reworked the main Compose file (database/Redis split out, Kafka/Zookeeper included)
    • Added store (storeId) and amount fields to the payment event

- Configured as a reverse proxy
- Accepts 10254 requests at a time
- Configured to receive client information and information about the proxies traversed
- Split the web-app deployment environment and the DB deployment environment onto separate EC2 instances.
- DB-only EC2: MySQL (api-server), mysql-payment, redis
- Web-app-only EC2: api-server, payment, postPayment, status-server, kafka
coderabbitai bot commented Apr 8, 2026

Warning

Rate limit exceeded

@dPwls0125 has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 9 minutes and 5 seconds before requesting another review.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 8412d7f8-d30a-4d19-9c67-c46b791ea4f8

📥 Commits

Reviewing files that changed from the base of the PR and between 208ba8e and 436eac3.

📒 Files selected for processing (4)
  • Payment/src/main/java/personal/yejin/kafkaListner/PaymentRequestListener.java
  • Payment/src/test/java/PaymentEventHandlerUnitTest.java
  • docker-compose.yml
  • scripts/setup-db.sh
📝 Walkthrough

Adds production deployment infrastructure (workflows, scripts, Docker Compose, Nginx) and a prod profile for each service, introduces a storeId field into the payment event schema, and brings an SSE-based notification service and Kafka consumer group/handler changes to PostPayment.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| Deployment automation — .github/workflows/deploy.yml, scripts/setup-app.sh, scripts/setup-db.sh | Added an EC2 SSH deployment workflow and initial provisioning scripts for the app/DB servers (swap setup, Docker/Docker Compose installation, repository handling, .env generation). |
| Container orchestration & proxy — docker-compose.yml, docker-compose.db.yml, nginx/nginx.conf | Removed the existing MySQL/Redis services and added a DB-only Compose file, relocated Kafka/Zookeeper, added an Nginx reverse proxy, and moved profile/DB/Redis host settings into service environment variables. |
| Production settings (Spring) — Payment/src/main/resources/application-prod.yml, api-server/src/main/resources/application-prod.yml, status-server/src/main/resources/application-prod.yml | Added prod-profile configuration files: disabled JPA/Hibernate SQL output and formatting, adjusted log levels (root INFO; Hibernate/Spring ORM etc. WARN). |
| Payment event and notification flow — common-event/src/main/java/personal/yejin/PaymentResultEvent.java, Payment/src/.../handler/PaymentCompletedInternalEvent.java, Payment/src/.../handler/PaymentEventHandler.java, PostPayment/src/main/java/personal/yejin/NotifyService.java, PostPayment/src/main/java/personal/yejin/PostPaymentListener.java | Added a storeId field to PaymentResultEvent; changed the internal event record's fields (finalPrice → amount, storeId added). Added an SSE NotifyService to PostPayment (per-store SseEmitter management; subscribe() currently returns null) and renamed the Kafka listener group/method (payment-result-group → notify-to-store). |
| Infrastructure file additions/changes — docker-compose.db.yml, scripts/*, nginx/nginx.conf, docker-compose.yml | Large-scale additions/modifications of infra files: DB/Redis-only Compose, setup scripts, Nginx configuration, etc. |
| Minor comments/formatting — api-server/src/main/java/.../NotificationService.java | Changed a comment to TODO and cleaned up Javadoc/line breaks (no functional impact). |

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Kafka as Kafka(topic: payment-result)
    participant PostPayment as PostPayment-service
    participant NotifySvc as NotifyService (SSE manager)
    participant StoreClient as Store (SSE client)

    Kafka->>PostPayment: publish PaymentResultEvent(userId, storeId, paymentStatus)
    PostPayment->>NotifySvc: consumePaymentResultAndNotifyToStore(event)
    NotifySvc->>NotifySvc: lookup SseEmitter by storeId
    alt emitter exists
        NotifySvc->>StoreClient: emit SSE -> payment result
    else no emitter
        NotifySvc->>PostPayment: log or ignore (no connected client)
    end
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐰 Hop, hop, raising a new deployment flag,
Code and containers gather on the road,
Kafka rings out and SSE whispers,
Nginx paves the way so everyone can meet,
Deploy succeeded — carrot celebration party! 🥕✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 75.00%, which is below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped — CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The PR title clearly summarizes the main change. The core goal — implementing a two-EC2 split architecture — comes through well, and it is consistent with the large set of added infra files (docker-compose.yml, docker-compose.db.yml, deployment workflow, config files, etc.) and the code changes. |




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 9

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
PostPayment/src/main/java/personal/yejin/PostPaymentListener.java (1)

16-22: ⚠️ Potential issue | 🔴 Critical

The Kafka listener method body is empty, so the notification feature does not work

The consumePaymentResultAndNotifyToStore method receives the event but does nothing with it. The method name and the consumer group (notify-to-store) imply that a notification should be sent to the store, but the implementation is missing.

Also, kafkaTemplate is injected but never used; it appears NotifyService should be injected and used instead to notify the store.

🔧 Suggested implementation

```diff
 @RequiredArgsConstructor
 @Component
 public class PostPaymentListener
 {
-
-    private final KafkaTemplate<String, Object> kafkaTemplate;
+    private final NotifyService notifyService;

     // after payment
     @KafkaListener(topics = "payment-result", groupId = "notify-to-store")
     public void consumePaymentResultAndNotifyToStore(PaymentResultEvent event) {
-
-
-
+        notifyService.notifyStore(event.storeId(), event);
     }
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@PostPayment/src/main/java/personal/yejin/PostPaymentListener.java` around
lines 16 - 22, The Kafka listener consumePaymentResultAndNotifyToStore in
PostPaymentListener is empty so no store notification is sent; implement it to
build a store notification from the incoming PaymentResultEvent (validate event
!= null, extract order/store ids and status), then send the notification either
by calling an injected NotifyService (e.g., notifyService.notifyStore(...)) or
by using the existing kafkaTemplate.send(...) to the appropriate store-notify
topic; ensure you inject NotifyService if not present, handle exceptions with
logging, and acknowledge/skip malformed events.
common-event/src/main/java/personal/yejin/PaymentResultEvent.java (1)

8-18: ⚠️ Potential issue | 🔴 Critical

PaymentEventHandler must be updated for the new storeId field

Adding the storeId field to the PaymentResultEvent record changed its canonical constructor signature, but the event construction in Payment/src/main/java/personal/yejin/handler/PaymentEventHandler.java:35-44 was not updated for the new field.

Update the constructor call in PaymentEventHandler as follows:

  • Add a storeId argument after userId() (taken from the source event)
  • Replace finalPrice() with amount

The current constructor call passes only 8 arguments, but the record requires 10 fields.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@common-event/src/main/java/personal/yejin/PaymentResultEvent.java` around
lines 8 - 18, PaymentResultEvent's canonical constructor signature changed due
to adding storeId and renaming finalPrice to amount; update the
PaymentEventHandler event creation (in class PaymentEventHandler) to pass the
new storeId argument immediately after userId() (pull storeId from the source
event) and replace finalPrice() with amount, ensuring the
PaymentResultEvent(...) call supplies all 10 fields required by the record.
docker-compose.yml (1)

16-22: ⚠️ Potential issue | 🟠 Major

Redis placement and volumes declaration need review

There are two issues:

  1. Mismatch with the PR goal: per the PR description, Redis should live in docker-compose.db.yml on EC2 2 (DB), but it is currently included in the App server's compose file.

  2. Missing volumes declaration: the service references the redis-data:/data volume, but there is no top-level volumes declaration at the end of the file. This can cause an error when docker-compose runs.

```diff
+volumes:
+  redis-data:
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docker-compose.yml` around lines 16 - 22, The redis service is declared in
the wrong compose and references a missing top-level volume; move the service
block named "redis" (image: redis:7, container_name: redis-delivery, ports,
volumes) out of the App compose and into the DB compose, and add a top-level
volumes entry for "redis-data" in that DB compose (so the service uses
redis-data:/data and docker-compose has a matching volumes: redis-data:
declaration). Ensure you remove the redis service from the App compose after
moving it.
🧹 Nitpick comments (9)
PostPayment/src/main/java/personal/yejin/NotifyService.java (1)

17-35: Missing onError handler and initial connection-confirmation event

Compared with the implementation in api-server/.../NotificationService.java, the following are missing:

  1. onError handler: on error the emitter is never cleaned up, which can leak resources
  2. Initial dummy event: if no event is sent right after the SSE connection opens, some environments may return a 503 error
  3. A store-notification method: there is no send method for PostPaymentListener to call

♻️ Suggested improved implementation

```diff
     public SseEmitter subscribe(Long storeId) {
         SseEmitter emitter = new SseEmitter(DEFAULT_TIMEOUT);
         emitters.put(storeId, emitter);

         emitter.onCompletion(() -> {
             log.info("SSE emitter completed. storeId={}", storeId);
             emitters.remove(storeId);
         });

         emitter.onTimeout(() -> {
             log.info("SSE emitter timed out. storeId={}", storeId);
+            emitter.complete();
             emitters.remove(storeId);
         });

+        emitter.onError((e) -> {
+            log.error("SSE emitter error. storeId={}", storeId, e);
+            emitter.completeWithError(e);
+            emitters.remove(storeId);
+        });
+
+        // send an initial event to prevent 503 errors
+        sendToStore(storeId, "INIT", "Event stream created. [storeId=" + storeId + "]");

-        return null;
+        return emitter;
     }
+
+    public void notifyStore(Long storeId, Object data) {
+        SseEmitter emitter = emitters.get(storeId);
+        if (emitter != null) {
+            sendToStore(storeId, "PAYMENT_RESULT", data);
+        }
+    }
+
+    private void sendToStore(Long storeId, String eventName, Object data) {
+        SseEmitter emitter = emitters.get(storeId);
+        if (emitter != null) {
+            try {
+                emitter.send(SseEmitter.event()
+                        .name(eventName)
+                        .data(data));
+            } catch (Exception e) {
+                log.error("SSE send failed. storeId={}", storeId, e);
+                emitters.remove(storeId);
+            }
+        }
+    }
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@PostPayment/src/main/java/personal/yejin/NotifyService.java` around lines 17
- 35, subscribe currently creates and stores an SseEmitter but returns null and
lacks error handling, initial keep-alive event, and a public send method; update
subscribe (in NotifyService.subscribe) to return the created emitter, add
emitter.onError(throwable -> { log.error(..., throwable);
emitters.remove(storeId); }) and ensure the onTimeout/onCompletion log messages
are distinct, send an initial dummy/event after storing the emitter (e.g.,
emitter.send(...) or SseEmitter.event()) to avoid 503s, and add a public method
like notifyStore(Long storeId, Object payload) (or sendNotification/sendEvent)
that looks up emitters.get(storeId) and calls emitter.send(payload) inside
try/catch removing the emitter on failure so PostPaymentListener can call it.
scripts/setup-db.sh (2)

36-36: Add the -r option to the read command.

Without -r, backslashes in the password may be interpreted unintentionally.

🔧 Suggested fix

```diff
-    read -sp "MySQL Password Config: " MYSQL_PW
+    read -rsp "MySQL Password Config: " MYSQL_PW
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/setup-db.sh` at line 36, the script currently uses read -sp "MySQL Password Config: " MYSQL_PW, so a password containing backslashes may be interpreted unintentionally; add the -r option so read operates in raw mode. That is, find the read call in the script (the one using the string "MySQL Password Config: " and the variable MYSQL_PW) and modify it to include -r.

17-17: Quote the $USER variable.

Quote the variable to prevent word splitting and glob expansion.

🔧 Suggested fix

```diff
-    sudo usermod -aG docker $USER
+    sudo usermod -aG docker "$USER"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/setup-db.sh` at line 17, The usermod invocation uses an unquoted
shell variable which can cause word-splitting or glob expansion; update the
command that calls usermod (the line invoking sudo usermod -aG docker $USER) to
quote the variable so it becomes sudo usermod -aG docker "$USER", ensuring $USER
is treated as a single literal argument.
scripts/setup-app.sh (2)

36-37: Add the -r option to the read commands.

Use -r to avoid backslash-handling issues when reading the IP address and password.

🔧 Suggested fix

```diff
-    read -p "DB Server Private IP (EC2 #2 IP): " DB_IP
-    read -sp "MySQL Password: " MYSQL_PW
+    read -rp "DB Server Private IP (EC2 #2 IP): " DB_IP
+    read -rsp "MySQL Password: " MYSQL_PW
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/setup-app.sh` around lines 36 - 37, The two interactive reads for
DB_IP and MYSQL_PW (the read commands that populate DB_IP and MYSQL_PW) should
use the -r flag to prevent backslash escaping; update the read invocations (the
lines that currently call read -p "DB Server Private IP (EC2 `#2` IP): " DB_IP and
read -sp "MySQL Password: " MYSQL_PW) to include -r (e.g., read -r -p ... and
read -r -sp ...) so input is read raw and backslashes are preserved.

17-17: Quote the $USER variable.

As in setup-db.sh, quote the variable to prevent word splitting.

🔧 Suggested fix

```diff
-    sudo usermod -aG docker $USER
+    sudo usermod -aG docker "$USER"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/setup-app.sh` at line 17, The usermod invocation uses an unquoted
shell variable which can suffer word-splitting; update the sudo usermod -aG
docker $USER call to quote the variable so it reads the same but uses "$USER"
(i.e., change the unquoted $USER to a quoted "$USER" in the usermod command) to
match setup-db.sh and prevent word-splitting issues.
nginx/nginx.conf (1)

10-28: The basic reverse proxy configuration is sound.

The proxy headers (Host, X-Real-IP, X-Forwarded-For, X-Forwarded-Proto) are set correctly.

For operational stability, consider adding timeout settings.

🔧 Suggested timeout additions

```diff
     location / {
         proxy_pass http://api;
+        proxy_connect_timeout 60s;
+        proxy_read_timeout 60s;
+        proxy_send_timeout 60s;

         proxy_set_header Host $host;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@nginx/nginx.conf` around lines 10 - 28, Add explicit proxy timeout directives
inside the same server/location block (where proxy_pass http://api; and the
proxy_set_header lines live): configure proxy_connect_timeout,
proxy_send_timeout, and proxy_read_timeout with sensible values (e.g., seconds)
to avoid hanging upstream connections, and optionally set proxy_next_upstream
and proxy_cache_lock_timeout if you need failover/backpressure behavior; place
these directives alongside the existing proxy_set_header lines in the location /
block so they apply to requests routed to the api upstream.
docker-compose.db.yml (1)

42-48: Redis authentication and healthcheck are missing.

Unlike the MySQL services, Redis has no healthcheck and no authentication configured. A requirepass setting is recommended for production.

🔧 Suggested Redis security and healthcheck additions

```diff
   redis:
     image: redis:7
     container_name: redis-delivery
+    command: redis-server --requirepass ${REDIS_PASSWORD}
+    healthcheck:
+      test: ["CMD", "redis-cli", "-a", "${REDIS_PASSWORD}", "ping"]
+      interval: 10s
+      timeout: 5s
+      retries: 5
     ports:
       - "6379:6379"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docker-compose.db.yml` around lines 42 - 48, The redis service (service name
"redis", container_name "redis-delivery") lacks authentication and a
healthcheck; update the docker-compose service to require a password (introduce
an env var like REDIS_PASSWORD and start Redis with --requirepass
${REDIS_PASSWORD} or mount a redis.conf that sets requirepass) and add a
healthcheck that uses redis-cli PING with AUTH (e.g., run redis-cli -a
${REDIS_PASSWORD} PING and treat "PONG" as healthy) with appropriate
interval/retries/timeouts so orchestration can detect unhealthy Redis instances.
.github/workflows/deploy.yml (1)

22-24: There is no rollback mechanism for failed deployments.

If docker-compose up --build fails, the service may go down. Consider tagging images before deploying, or adding a rollback script.

🔧 Suggested basic error handling

```diff
           script: |
+            set -e
             cd ~/Food-Delivery-Service
-            git pull
+            git fetch origin
+            git checkout develop
+            git pull origin develop

             # Rebuild and restart containers from the latest source (the DB lives on the DB server, so it is unaffected)
-            docker-compose up -d --build
+            docker-compose up -d --build || {
+              echo "Deployment failed: $(date)"
+              exit 1
+            }
             docker image prune -f
             echo "Deployment complete: $(date)"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/deploy.yml around lines 22 - 24, Before running the
current "docker-compose up -d --build" step, capture and tag the running images
as a fallback and run the build/deploy in a way that checks for failure;
specifically: (1) add a step to record/tag current service images (use docker
inspect/image id and docker tag) so you have a "previous" tag; (2) replace the
direct "docker-compose up -d --build" invocation with a sequence that builds new
images (docker-compose build or docker build with explicit tags), then runs
docker-compose up -d, and checks the exit code; (3) if the deploy fails, run a
rollback step that retags the saved "previous" images back to their original
names and runs docker-compose up -d to restore the prior state; and (4) keep the
existing "docker image prune -f" but only run it after a successful deploy.
Ensure you reference and modify the existing commands "docker-compose up -d
--build" and "docker image prune -f" in the deploy workflow to implement the
tagging, build-with-check, and conditional rollback logic.
docker-compose.yml (1)

100-112: Kafka data persistence needs consideration

The Kafka configuration is correct, but no volume is defined, so message and offset data will be lost when the container restarts. Adding a volume is recommended for data persistence in production.

```diff
   kafka:
     image: confluentinc/cp-kafka:7.5.0
     container_name: kafka-delivery
     depends_on:
       - zookeeper
+    volumes:
+      - kafka-data:/var/lib/kafka/data
     environment:
       # ... existing config
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docker-compose.yml` around lines 100 - 112, The kafka service in
docker-compose.yml lacks a volume, so broker data (messages and offsets) will be
lost on container restart; update the kafka service definition (service name
"kafka") to mount a persistent Docker volume (e.g., a named volume like
kafka-data) into Kafka's data directory inside the container (referencing
KAFKA_LOG_DIR or the container path /var/lib/kafka/data depending on the image),
and add the corresponding top-level volumes entry to the compose file so the
volume is created and persisted across restarts; also consider doing the same
for the "zookeeper" service to persist ZK data.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/workflows/deploy.yml:
- Around line 18-25: The deploy script uses an unqualified git pull causing the
working copy (cloned by setup-app.sh as feat/infra) to pull origin/feat/infra
and mismatch deploys; update the script block in .github/workflows/deploy.yml to
explicitly fetch and switch to the intended deploy branch (the workflow triggers
for develop): run git fetch origin, git checkout develop (or git switch
develop), and then reset/pull from origin/develop (e.g., git reset --hard
origin/develop or git pull origin develop) before building so the working tree
always reflects origin/develop rather than the cloned feat/infra branch.

In
`@api-server/src/main/java/personal/yejin/foodDelivery/domain/notification/service/NotificationService.java`:
- Around line 15-16: The NotificationService currently stores SSE sessions in a
local ConcurrentHashMap (emitters), which will lose events in multi-instance or
rolling-deploy setups; replace or augment this with a shared pub/sub/fan-out so
SSE events reach all replicas: remove reliance on the local emitters map in
NotificationService (and associated add/remove/send logic) and implement a Redis
Pub/Sub (or other shared event bus) channel for broadcast + a per-instance local
emitter registry to forward received pubsub messages to SseEmitter entries;
alternatively, if you cannot complete pub/sub now, make the TODO explicit and
fail fast by adding a configuration check in NotificationService startup that
requires a single-replica mode flag (e.g., requireSingleReplica) and log/error
if not set so deployments cannot accidentally run multi-replica without Redis
Pub/Sub.

In `@api-server/src/main/resources/application-prod.yml`:
- Around line 6-8: In application-prod.yml change the JPA hibernate ddl-auto
setting away from unsafe "update" to a non-destructive production-safe value
(e.g., "validate" or "none") and remove any automatic schema-modifying behavior;
update any deployment docs/config so database migrations are performed via a
migration tool (Flyway or Liquibase) instead of relying on
jpa.hibernate.ddl-auto, and ensure the default application.yml "create-drop" is
not used in prod environments.

In `@docker-compose.yml`:
- Around line 74-75: Remove the unused DATABASE_HOST and DATABASE_PORT
environment variables from the PostPayment service in docker-compose.yml: locate
the PostPayment service section and delete the lines setting
DATABASE_HOST=${DATABASE_HOST} and DATABASE_PORT=${DATABASE_PORT:-3306} so the
service no longer receives DB-related env vars; ensure no other PostPayment env
blocks reference DATABASE_HOST or DATABASE_PORT and commit the change.
- Around line 57-58: The docker-compose env defaults use the host-exposed port
3307 causing payment-server to connect to the wrong port inside the Docker
network; update the PAYMENT_DB_PORT default to 3306 so service-to-service
communication targets the container port used by mysql-payment and no longer
overrides the application.yml default (change
PAYMENT_DB_PORT=${PAYMENT_DB_PORT:-3307} to use 3306).

In `@PostPayment/src/main/java/personal/yejin/NotifyService.java`:
- Around line 26-29: Update the onTimeout handler in NotifyService so the log
message accurately reflects a timeout: in the emitter.onTimeout lambda (the
block that currently calls log.info("SSE emitter 완료됨. storeId={}", storeId) and
emitters.remove(storeId)), change the log text to indicate "타임아웃됨" (e.g., "SSE
emitter 타임아웃됨. storeId={}") while keeping the same logger, storeId parameter,
and the remove call on emitters.
- Line 34: The subscribe method in NotifyService creates and configures an
SseEmitter but returns null; change it to return the created SseEmitter instance
(the emitter variable) so the SSE connection is handed back to the client —
mirror the return pattern used in NotificationService (return the configured
SseEmitter from subscribe), ensuring any exception handling still results in
returning the emitter or propagating the error as appropriate.

In `@scripts/setup-app.sh`:
- Around line 27-31: The script uses the wrong branch name; update the git
operations in scripts/setup-app.sh to use the develop branch instead of
feat/infra: change the git pull command that currently references feat/infra to
pull origin develop and change the git clone -b argument from feat/infra to
develop (the block that checks if [ -d "$REPO_DIR" ] and the else clone branch).
Also make the same branch change in setup-db.sh so both scripts use develop
consistently.

In `@scripts/setup-db.sh`:
- Around line 27-31: The script currently hardcodes the branch name "feat/infra"
when cloning and pulling; change it to use a branch variable (e.g., BRANCH or
GIT_BRANCH) with a sensible default (develop) and reference that variable in the
git commands: set BRANCH="${BRANCH:-develop}" near the top, then replace both
occurrences of "feat/infra" with "$BRANCH" in the git pull and git clone
commands so REPO_DIR operations use the intended branch from the environment or
default to develop.

---

Outside diff comments:
In `@common-event/src/main/java/personal/yejin/PaymentResultEvent.java`:
- Around line 8-18: PaymentResultEvent's canonical constructor signature changed
due to adding storeId and renaming finalPrice to amount; update the
PaymentEventHandler event creation (in class PaymentEventHandler) to pass the
new storeId argument immediately after userId() (pull storeId from the source
event) and replace finalPrice() with amount, ensuring the
PaymentResultEvent(...) call supplies all 10 fields required by the record.

In `@docker-compose.yml`:
- Around line 16-22: The redis service is declared in the wrong compose and
references a missing top-level volume; move the service block named "redis"
(image: redis:7, container_name: redis-delivery, ports, volumes) out of the App
compose and into the DB compose, and add a top-level volumes entry for
"redis-data" in that DB compose (so the service uses redis-data:/data and
docker-compose has a matching volumes: redis-data: declaration). Ensure you
remove the redis service from the App compose after moving it.

In `@PostPayment/src/main/java/personal/yejin/PostPaymentListener.java`:
- Around line 16-22: The Kafka listener consumePaymentResultAndNotifyToStore in
PostPaymentListener is empty so no store notification is sent; implement it to
build a store notification from the incoming PaymentResultEvent (validate event
!= null, extract order/store ids and status), then send the notification either
by calling an injected NotifyService (e.g., notifyService.notifyStore(...)) or
by using the existing kafkaTemplate.send(...) to the appropriate store-notify
topic; ensure you inject NotifyService if not present, handle exceptions with
logging, and acknowledge/skip malformed events.

---

Nitpick comments:
In @.github/workflows/deploy.yml:
- Around line 22-24: Before running the current "docker-compose up -d --build"
step, capture and tag the running images as a fallback and run the build/deploy
in a way that checks for failure; specifically: (1) add a step to record/tag
current service images (use docker inspect/image id and docker tag) so you have
a "previous" tag; (2) replace the direct "docker-compose up -d --build"
invocation with a sequence that builds new images (docker-compose build or
docker build with explicit tags), then runs docker-compose up -d, and checks the
exit code; (3) if the deploy fails, run a rollback step that retags the saved
"previous" images back to their original names and runs docker-compose up -d to
restore the prior state; and (4) keep the existing "docker image prune -f" but
only run it after a successful deploy. Ensure you reference and modify the
existing commands "docker-compose up -d --build" and "docker image prune -f" in
the deploy workflow to implement the tagging, build-with-check, and conditional
rollback logic.

In `@docker-compose.db.yml`:
- Around line 42-48: The redis service (service name "redis", container_name
"redis-delivery") lacks authentication and a healthcheck; update the
docker-compose service to require a password (introduce an env var like
REDIS_PASSWORD and start Redis with --requirepass ${REDIS_PASSWORD} or mount a
redis.conf that sets requirepass) and add a healthcheck that uses redis-cli PING
with AUTH (e.g., run redis-cli -a ${REDIS_PASSWORD} PING and treat "PONG" as
healthy) with appropriate interval/retries/timeouts so orchestration can detect
unhealthy Redis instances.

In `@docker-compose.yml`:
- Around line 100-112: The kafka service in docker-compose.yml lacks a volume,
so broker data (messages and offsets) will be lost on container restart; update
the kafka service definition (service name "kafka") to mount a persistent Docker
volume (e.g., a named volume like kafka-data) into Kafka's data directory inside
the container (referencing KAFKA_LOG_DIR or the container path
/var/lib/kafka/data depending on the image), and add the corresponding top-level
volumes entry to the compose file so the volume is created and persisted across
restarts; also consider doing the same for the "zookeeper" service to persist ZK
data.

In `@nginx/nginx.conf`:
- Around line 10-28: Add explicit proxy timeout directives inside the same
server/location block (where proxy_pass http://api; and the proxy_set_header
lines live): configure proxy_connect_timeout, proxy_send_timeout, and
proxy_read_timeout with sensible values (e.g., seconds) to avoid hanging
upstream connections, and optionally set proxy_next_upstream and
proxy_cache_lock_timeout if you need failover/backpressure behavior; place these
directives alongside the existing proxy_set_header lines in the location / block
so they apply to requests routed to the api upstream.

In `@PostPayment/src/main/java/personal/yejin/NotifyService.java`:
- Around line 17-35: subscribe currently creates and stores an SseEmitter but
returns null and lacks error handling, initial keep-alive event, and a public
send method; update subscribe (in NotifyService.subscribe) to return the created
emitter, add emitter.onError(throwable -> { log.error(..., throwable);
emitters.remove(storeId); }) and ensure the onTimeout/onCompletion log messages
are distinct, send an initial dummy/event after storing the emitter (e.g.,
emitter.send(...) or SseEmitter.event()) to avoid 503s, and add a public method
like notifyStore(Long storeId, Object payload) (or sendNotification/sendEvent)
that looks up emitters.get(storeId) and calls emitter.send(payload) inside
try/catch removing the emitter on failure so PostPaymentListener can call it.
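The emitter-registry pattern the review asks for can be shown framework-free; here `Emitter` is a stand-in for Spring's `SseEmitter`, so the interface and method names are illustrative rather than the project's actual API:

```java
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the registry pattern: store per-store emitters, send an initial
// event on subscribe, expose a send method, and evict broken emitters.
public class NotifyServiceSketch {
    interface Emitter { void send(Object payload) throws IOException; }

    private final Map<Long, Emitter> emitters = new ConcurrentHashMap<>();

    // subscribe stores the emitter and returns it (instead of returning null)
    public Emitter subscribe(Long storeId, Emitter emitter) {
        emitters.put(storeId, emitter);
        try {
            emitter.send("connected");   // initial event avoids client-side 503s
        } catch (IOException e) {
            emitters.remove(storeId);
        }
        return emitter;
    }

    // Public send method a listener can call; failures remove the emitter
    public boolean notifyStore(Long storeId, Object payload) {
        Emitter emitter = emitters.get(storeId);
        if (emitter == null) return false;
        try {
            emitter.send(payload);
            return true;
        } catch (IOException e) {
            emitters.remove(storeId);
            return false;
        }
    }
}
```

In the real service, `onError`, `onTimeout`, and `onCompletion` callbacks on the `SseEmitter` would each log distinctly and call the same removal logic.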

In `@scripts/setup-app.sh`:
- Around line 36-37: The two interactive reads that populate DB_IP and MYSQL_PW should use the -r flag to prevent backslash escaping; update the invocations (currently read -p "DB Server Private IP (EC2 #2 IP): " DB_IP and read -sp "MySQL Password: " MYSQL_PW) to read -r -p ... and read -r -sp ... so input is read raw and backslashes are preserved.
- Line 17: The usermod invocation uses an unquoted shell variable which can
suffer word-splitting; update the sudo usermod -aG docker $USER call to quote
the variable so it reads the same but uses "$USER" (i.e., change the unquoted
$USER to a quoted "$USER" in the usermod command) to match setup-db.sh and
prevent word-splitting issues.
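Both shell fixes above (raw reads and quoted expansion) can be seen in a tiny standalone demo; the variable names echo the scripts, but the snippet itself is illustrative:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Without -r, read treats backslashes as escape characters and drops them.
printf 'pa\\ss\n' | { read    PW; printf 'no -r:   %s\n' "$PW"; }   # no -r:   pass
printf 'pa\\ss\n' | { read -r PW; printf 'with -r: %s\n' "$PW"; }   # with -r: pa\ss

# Unquoted expansion is word-split; quoting keeps the value as one argument.
NAME='my user'
set -- $NAME
echo "unquoted: $# args"    # unquoted: 2 args
set -- "$NAME"
echo "quoted: $# arg"       # quoted: 1 arg
```

This is why `usermod -aG docker "$USER"` is safer than the unquoted form, even though `$USER` rarely contains spaces in practice.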

In `@scripts/setup-db.sh`:
- Line 36: The script currently uses read -sp "MySQL Password Config: " MYSQL_PW, so a password containing backslashes can be interpreted unintentionally; add the -r option so input is read in raw mode. That is, find the read call in the script (the part using the string "MySQL Password Config: " and the variable MYSQL_PW) and modify it to include -r.
- Line 17: The usermod invocation uses an unquoted shell variable which can
cause word-splitting or glob expansion; update the command that calls usermod
(the line invoking sudo usermod -aG docker $USER) to quote the variable so it
becomes sudo usermod -aG docker "$USER", ensuring $USER is treated as a single
literal argument.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 117a27fd-1452-49b0-8ad1-cc2b2a444cd0

📥 Commits

Reviewing files that changed from the base of the PR and between 27b06a8 and 56b5509.

📒 Files selected for processing (13)
  • .github/workflows/deploy.yml
  • Payment/src/main/resources/application-prod.yml
  • PostPayment/src/main/java/personal/yejin/NotifyService.java
  • PostPayment/src/main/java/personal/yejin/PostPaymentListener.java
  • api-server/src/main/java/personal/yejin/foodDelivery/domain/notification/service/NotificationService.java
  • api-server/src/main/resources/application-prod.yml
  • common-event/src/main/java/personal/yejin/PaymentResultEvent.java
  • docker-compose.db.yml
  • docker-compose.yml
  • nginx/nginx.conf
  • scripts/setup-app.sh
  • scripts/setup-db.sh
  • status-server/src/main/resources/application-prod.yml

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
dPwls0125 and others added 5 commits April 8, 2026 16:52
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
- Change payment-server's internal DB communication port to 3306
- Remove the unused DATABASE_HOST and DATABASE_PORT environment variables from postpayment-server
- Add storeId to the PaymentResultEvent constructor arguments and rename finalPrice -> amount
- Remove the duplicate redis definition from docker-compose.yml
@coderabbitai

coderabbitai bot commented Apr 8, 2026

Note

Autofix is a beta feature. Expect some limitations and changes as we gather feedback and continue to improve it.

Autofix skipped. No unresolved CodeRabbit review comments with fix instructions found.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (2)
scripts/setup-db.sh (1)

17-17: Wrap the $USER variable in quotes.

Quoting the variable prevents unexpected behavior when the user name contains spaces or special characters.

♻️ Suggested fix
-    sudo usermod -aG docker $USER
+    sudo usermod -aG docker "$USER"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/setup-db.sh` at line 17, the command sudo usermod -aG docker $USER can misbehave when variable expansion introduces spaces or special characters; find the usermod call in the script (the string "sudo usermod -aG docker $USER") and wrap $USER in double quotes.
docker-compose.yml (1)

62-66: Remove unused environment variables from postpayment-server

PostPayment is a Kafka consumer service and does not use a database. The MYSQL_USERNAME and MYSQL_PASSWORD environment variables are injected but never actually referenced by the PostPayment code, so remove them.

♻️ Remove unused environment variables
     environment:
       - SPRING_PROFILES_ACTIVE=${SPRING_PROFILE:-local}
-      - MYSQL_USERNAME=${MYSQL_USERNAME}
-      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
       - SPRING_KAFKA_BOOTSTRAP_SERVERS=kafka:29092
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docker-compose.yml` around lines 62 - 66, The docker-compose environment
block for the postpayment-server injects unused DB credentials; remove the two
environment entries MYSQL_USERNAME and MYSQL_PASSWORD from the service
definition so only relevant vars remain (e.g., SPRING_PROFILES_ACTIVE and
SPRING_KAFKA_BOOTSTRAP_SERVERS). Locate the environment list in
docker-compose.yml and delete the lines containing MYSQL_USERNAME and
MYSQL_PASSWORD to prevent injecting unused secrets.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@docker-compose.yml`:
- Around line 78-81: The status-server environment in docker-compose.yml includes MySQL settings it does not need; in the status-server service's environment block, delete the MYSQL_USERNAME and MYSQL_PASSWORD entries while keeping SPRING_PROFILES_ACTIVE and REDIS_HOST. Concretely, remove the "- MYSQL_USERNAME=${MYSQL_USERNAME}" and "- MYSQL_PASSWORD=${MYSQL_PASSWORD}" entries visible in the current diff.

In
`@Payment/src/main/java/personal/yejin/handler/PaymentCompletedInternalEvent.java`:
- Around line 15-18: Update all call sites to match the new
PaymentCompletedInternalEvent record signature: include the new Long storeId as
the first argument and replace the old finalPrice usage with the renamed amount
int in the correct parameter order (storeId, paymentStatus, timestamp, amount).
Specifically, fix instantiations in PaymentRequestListener and in
PaymentEventHandlerUnitTest (all test constructors/assertions) so they supply
storeId and the amount value in place of finalPrice; ensure imports/variable
names used at those call sites are adjusted accordingly.
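Under the assumption that PaymentCompletedInternalEvent is a plain Java record (the field types here are inferred from the review text, not confirmed against the source), the new signature and a matching call site would look roughly like:

```java
import java.time.LocalDateTime;

public class PaymentEventSketch {
    // Hypothetical reconstruction after the change: storeId added first,
    // finalPrice renamed to amount.
    record PaymentCompletedInternalEvent(
            Long storeId, String paymentStatus, LocalDateTime timestamp, int amount) {}

    public static void main(String[] args) {
        // A call site such as PaymentRequestListener would now supply storeId
        // and amount in this order instead of the old finalPrice argument.
        var event = new PaymentCompletedInternalEvent(
                42L, "COMPLETED", LocalDateTime.now(), 15_000);
        System.out.println(event.storeId() + " " + event.amount()); // prints "42 15000"
    }
}
```

Because records generate accessors from component names, tests asserting on `finalPrice()` must also be updated to `amount()`.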

In `@scripts/setup-db.sh`:
- Around line 27-33: The script contains an extra closing "fi" that causes a
syntax error; locate the if block starting with if [ -d "$REPO_DIR" ] and remove
the redundant trailing fi after the else branch closure so the block is properly
terminated by the single fi that closes the if [ -d "$REPO_DIR" ] conditional
(ensure the remaining fi corresponds to that if and no unmatched fi remain).
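The intended control flow, with exactly one fi closing the conditional, can be sketched as follows (REPO_DIR and the git commands are stand-ins for whatever setup-db.sh actually runs):

```shell
#!/usr/bin/env bash
set -euo pipefail

REPO_DIR="${REPO_DIR:-/tmp/demo-repo}"

if [ -d "$REPO_DIR" ]; then
    echo "repo exists, updating"
    # (cd "$REPO_DIR" && git pull) would go here
else
    echo "repo missing, cloning"
    # git clone <url> "$REPO_DIR" would go here
fi   # one fi closes the one if; a second fi here would be a syntax error
```

Running the script through `bash -n setup-db.sh` is a quick way to catch this class of error before deploying.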

---

Nitpick comments:
In `@docker-compose.yml`:
- Around line 62-66: The docker-compose environment block for the
postpayment-server injects unused DB credentials; remove the two environment
entries MYSQL_USERNAME and MYSQL_PASSWORD from the service definition so only
relevant vars remain (e.g., SPRING_PROFILES_ACTIVE and
SPRING_KAFKA_BOOTSTRAP_SERVERS). Locate the environment list in
docker-compose.yml and delete the lines containing MYSQL_USERNAME and
MYSQL_PASSWORD to prevent injecting unused secrets.

In `@scripts/setup-db.sh`:
- Line 17: In the command sudo usermod -aG docker $USER, wrap $USER in quotes to avoid problems with spaces or special characters introduced by variable expansion; that is, find the usermod call in the script (the string "sudo usermod -aG docker $USER") and change $USER to the double-quoted form.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: ab35dccb-57ba-4aab-8779-cfad0bdd50f4

📥 Commits

Reviewing files that changed from the base of the PR and between 6697ea7 and 208ba8e.

📒 Files selected for processing (5)
  • Payment/src/main/java/personal/yejin/handler/PaymentCompletedInternalEvent.java
  • Payment/src/main/java/personal/yejin/handler/PaymentEventHandler.java
  • PostPayment/src/main/java/personal/yejin/NotifyService.java
  • docker-compose.yml
  • scripts/setup-db.sh
🚧 Files skipped from review as they are similar to previous changes (1)
  • PostPayment/src/main/java/personal/yejin/NotifyService.java

