[DEPLOY]: Support automatic deployment on develop->main merge and manual deployment #51

Merged
arlen02-01 merged 3 commits into develop from OT-90-feature/cicd
Feb 21, 2026
Conversation


arlen02-01 (Contributor) commented Feb 21, 2026

📝 Work Done

Applied CI/CD configuration and set up Git secrets

📷 Screenshots

☑️ Checklist

Please confirm the checklist items

  • Did you file an issue?
  • Did you add labels?

#️⃣ Related Issue

OT-90

💬 Review Requests

If there is anything you would particularly like reviewers to focus on, note it here

e.g., Is this exception handling approach acceptable? / Please look closely at the ~~ part

Summary by CodeRabbit

Changes

  • Chores
    • Added an automated deployment workflow. It builds container images for several services (user API, admin API, transcoder), pushes them to a registry, and deploys them to a remote EC2 instance, automatically restarting the services. Per-service environment variables and port mappings are supported, and the deployment verifies status and reports errors through remote command execution.

arlen02-01 self-assigned this Feb 21, 2026
arlen02-01 added the deploy (project deployment) label Feb 21, 2026

coderabbitai bot commented Feb 21, 2026

No actionable comments were generated in the recent review. 🎉


📝 Walkthrough

Adds a new GitHub Actions workflow that deploys Docker applications to an EC2 instance. The workflow builds and pushes three service images to ECR, then uses AWS SSM to install the environment payload on the EC2 host and pull and restart the containers. (Uses RDS credentials.)

Changes

Cohort / File(s) Summary
GitHub Actions workflow
.github/workflows/deploy-ec2-docker.yml
New workflow added: build-and-push (matrix build of api-user, api-admin, and transcoder images, ECR push, tag management) and deploy (sends the env payload to EC2 via SSM, logs in to ECR and pulls images, removes the existing container and reruns it, optional port mapping, RDS credential validation).
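
For orientation, here is a minimal sketch of how such a matrix build-and-push job can be wired up. The workflow name and service list come from this review; the region, secret names, build context paths, and action versions are assumptions, not the exact contents of this PR:

name: Deploy Docker Apps To EC2

on:
  workflow_dispatch:
  pull_request:
    types: [closed]

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        service: [api-user, api-admin, transcoder]
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ap-northeast-2   # assumed region
      - id: ecr
        uses: aws-actions/amazon-ecr-login@v2
      - name: Build and push image
        run: |
          # Tag with the commit SHA and latest, mirroring the tag management described above
          image="${{ steps.ecr.outputs.registry }}/${{ matrix.service }}"
          docker build -t "$image:$GITHUB_SHA" -t "$image:latest" "services/${{ matrix.service }}"  # build path is an assumption
          docker push --all-tags "$image"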

Sequence Diagram(s)

sequenceDiagram
    autonumber
    participant GH as "GitHub Actions"
    participant ECR as "ECR"
    participant SSM as "AWS SSM (EC2)"
    participant EC2 as "EC2 Docker Host"
    participant RDS as "RDS (env)"

    GH->>ECR: Build (matrix) and push (version tag, latest)
    ECR-->>GH: Return image URI
    GH->>SSM: Send per-service deploy script + base64 env payload
    SSM->>EC2: Run commands (write env file, docker login, pull)
    EC2->>ECR: Pull image
    EC2->>EC2: Stop/remove old container, run new container (port mapping)
    EC2->>RDS: Connect at runtime using provided DB credentials
    SSM-->>GH: Poll command status and report result

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐇 Hop, hop, the images go up,
Secrets tucked away in base64.
When ECR beckons, SSM opens the door,
And containers dance atop EC2.
Deployment done, a carrot bite to celebrate. 🥕

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The PR title clearly describes the main changes, accurately reflecting the core additions: an EC2 Docker deployment workflow with both automatic and manual deployment support.
Docstring Coverage ✅ Passed No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch OT-90-feature/cicd

Comment @coderabbitai help to get the list of available commands and usage tips.

coderabbitai bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (3)
.github/workflows/deploy-ec2-docker.yml (3)

113-116: Careful newline handling is needed when building the environment variable payload.

If env_payload already contains leading or trailing newlines, blank lines can end up in the .env file. This is harmless in most cases, but some parsers may have issues with it.

♻️ Add newline cleanup
-           full_env_payload=$(printf 'SPRING_DATASOURCE_URL=jdbc:mysql://%s:3306/%s\nSPRING_DATASOURCE_USERNAME=%s\nSPRING_DATASOURCE_PASSWORD=%s\n%s' "${RDS_ENDPOINT}" "${DB_NAME}" "${DB_USERNAME}" "${DB_PASSWORD}" "${env_payload}")
+           # Trim leading/trailing whitespace from env_payload
+           trimmed_env_payload=$(printf '%s' "${env_payload}" | sed '/^$/d')
+           full_env_payload=$(printf 'SPRING_DATASOURCE_URL=jdbc:mysql://%s:3306/%s\nSPRING_DATASOURCE_USERNAME=%s\nSPRING_DATASOURCE_PASSWORD=%s\n%s' "${RDS_ENDPOINT}" "${DB_NAME}" "${DB_USERNAME}" "${DB_PASSWORD}" "${trimmed_env_payload}")
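
For reference, the same cleanup can also be done with pure bash parameter expansion instead of sed. A sketch (note that, unlike sed '/^$/d', this trims only the edges and leaves interior blank lines alone):

            # Trim leading, then trailing, whitespace/newlines without spawning a subprocess (bash)
            env_payload="${env_payload#"${env_payload%%[![:space:]]*}"}"
            env_payload="${env_payload%"${env_payload##*[![:space:]]}"}"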
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/deploy-ec2-docker.yml around lines 113 - 116, The
full_env_payload construction can include unintended leading/trailing newlines
from the env_payload variable causing blank lines in the generated .env; before
creating full_env_payload (and env_payload_b64) strip or normalize
leading/trailing newlines from env_payload (e.g., trim with shell parameter
expansion or a small sed/awk/printf normalization) so you concatenate cleanly
into full_env_payload and then base64-encode that cleaned value into
env_payload_b64; update references around the full_env_payload and
env_payload_b64 assignments to use the trimmed env_payload.

134-136: Replacing the container causes brief downtime.

Removing the existing container with docker rm -f before starting the new one incurs a short outage. If zero-downtime deployment is required in production, this needs improvement.

♻️ Zero-downtime deployment options

Option 1: Start the new container first, then switch over

# Start the new container under a different name first
sudo docker run -d --name ${container_name}-new ...
# After a health check, stop the old container
sudo docker rm -f ${container_name} || true
sudo docker rename ${container_name}-new ${container_name}

Option 2: Consider Docker Compose or ECS instead
Using ECS, Docker Swarm, or similar in place of single-instance Docker handles rolling updates automatically.
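
A slightly fuller sketch of option 1 with a health check. The health endpoint and poll counts are illustrative, and one caveat applies: two containers cannot publish the same host port at once, so with -p port mapping the old container must still be stopped first; the flow below is cleanest when traffic reaches containers through a proxy on a Docker network:

# Start the replacement under a temporary name (no -p here; assumes a proxy routes traffic)
sudo docker run -d --name "${container_name}-new" --restart unless-stopped \
  --env-file "${env_file}" "${image_uri}"

# Poll the new container before switching over (assumes curl exists in the image and a
# Spring Boot style health endpoint; both are assumptions)
healthy=""
for _ in $(seq 1 30); do
  if sudo docker exec "${container_name}-new" curl -fsS http://localhost:8080/actuator/health >/dev/null 2>&1; then
    healthy=1
    break
  fi
  sleep 2
done

if [ -z "$healthy" ]; then
  echo "new container failed health check; keeping the old container" >&2
  sudo docker rm -f "${container_name}-new"
  exit 1
fi

# Swap: remove the old container and promote the new one
sudo docker rm -f "${container_name}" || true
sudo docker rename "${container_name}-new" "${container_name}"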

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/deploy-ec2-docker.yml around lines 134 - 136, The current
deploy step removes the running container with "sudo docker rm -f
${container_name}" which causes a short downtime; change the sequence to start
the new image under a temporary name (use ${container_name}-new or similar)
using ${run_cmd} adjusted to that name, wait/perform a health check on the new
container, then stop/remove the old container and rename the new one to
${container_name}; alternatively replace the single-instance flow with a managed
rolling update (Docker Compose, ECS) if you need platform-level zero-downtime
deployments.

19-19: The conditional is duplicated across two jobs.

The same long conditional appears at lines 19 and 70. Consider filtering at the workflow_dispatch or pull_request trigger level instead.

♻️ Suggested improvement

Adding a branch filter to the pull_request trigger lets the conditional be simplified:

 on:
   workflow_dispatch:
     inputs:
       image_tag:
         description: "Docker image tag to deploy (default: commit SHA)"
         required: false
         type: string
   pull_request:
     types:
       - closed
+    branches:
+      - main

 jobs:
   build-and-push:
-    if: ${{ github.event_name == 'workflow_dispatch' || (github.event_name == 'pull_request' && github.event.pull_request.merged == true && github.event.pull_request.base.ref == 'main' && github.event.pull_request.head.ref == 'develop') }}
+    if: ${{ github.event_name == 'workflow_dispatch' || (github.event.pull_request.merged == true && github.event.pull_request.head.ref == 'develop') }}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/deploy-ec2-docker.yml at line 19, The long conditional
expression used twice should be moved into trigger-level filtering to avoid
duplication: update the workflow "on:" block to include workflow_dispatch and a
pull_request trigger with types: [closed] and branches: [main] (so only
closed/merged PRs targeting main fire), then remove the duplicated full
expression (the lengthy "${{ github.event_name == 'workflow_dispatch' ||
(github.event_name == 'pull_request' && github.event.pull_request.merged == true
&& github.event.pull_request.base.ref == 'main' &&
github.event.pull_request.head.ref == 'develop') }}" ifs) from the two jobs and
replace them with a much simpler job-level check (e.g., if: ${{
github.event_name == 'workflow_dispatch' || github.event.pull_request.head.ref
== 'develop' }}), keeping the remaining merged/head checks only if absolutely
necessary.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/workflows/deploy-ec2-docker.yml:
- Line 1: The file ".github/workflows/deploy-ec2-docker.yml" contains a leading
BOM character (U+FEFF) at the start of the first line causing YAML parsers to
fail; open that workflow file and remove the BOM by re-saving it as UTF-8
without BOM or by stripping the first BOM byte so the first line begins with
"name: Deploy Docker Apps To EC2" (no invisible characters) and commit the
cleaned file.
- Around line 99-102: Extend the secret validation in the deployment script to
also check API_USER_ENV, API_ADMIN_ENV, and TRANSCODER_ENV; update the existing
conditional that currently validates RDS_ENDPOINT, DB_USERNAME, DB_PASSWORD to
include these three environment variables (or add explicit checks for each) and
emit a clear stderr message naming any missing secrets before exiting with
status 1 so the workflow fails fast when any of API_USER_ENV, API_ADMIN_ENV, or
TRANSCODER_ENV is unset.
- Around line 124-142: The SSM command is sent asynchronously (aws ssm
send-command) and its CommandId is currently discarded (>/dev/null), so the
workflow can't verify deployment success; capture the CommandId output instead
and add a follow-up step that polls aws ssm get-command-invocation (or
list-command-invocations) for that CommandId until the invocation Status is
Success or a timeout is reached, then fail the workflow if the status is
Failed/TimedOut/Cancelled; ensure you stop redirecting CommandId to /dev/null
and use the same container_name/image_uri context so logs from the remote
invocation can be retrieved for debugging.


Comment on lines +99 to +102
if [ -z "${RDS_ENDPOINT}" ] || [ -z "${DB_USERNAME}" ] || [ -z "${DB_PASSWORD}" ]; then
echo "RDS_ENDPOINT, DB_USERNAME, DB_PASSWORD secrets are required" >&2
exit 1
fi
coderabbitai bot commented Feb 21, 2026

⚠️ Potential issue | 🟡 Minor

Validation of the per-service environment secrets is missing.

RDS_ENDPOINT, DB_USERNAME, and DB_PASSWORD are validated, but the API_USER_ENV, API_ADMIN_ENV, and TRANSCODER_ENV secrets are not. If those values are empty, the containers may not behave as expected.

🛡️ Suggested additional validation
          if [ -z "${RDS_ENDPOINT}" ] || [ -z "${DB_USERNAME}" ] || [ -z "${DB_PASSWORD}" ]; then
            echo "RDS_ENDPOINT, DB_USERNAME, DB_PASSWORD secrets are required" >&2
            exit 1
          fi
+
+         if [ -z "${API_USER_ENV}" ] || [ -z "${API_ADMIN_ENV}" ] || [ -z "${TRANSCODER_ENV}" ]; then
+           echo "Warning: One or more service-specific env vars are empty" >&2
+         fi
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
(before)
          if [ -z "${RDS_ENDPOINT}" ] || [ -z "${DB_USERNAME}" ] || [ -z "${DB_PASSWORD}" ]; then
            echo "RDS_ENDPOINT, DB_USERNAME, DB_PASSWORD secrets are required" >&2
            exit 1
          fi
(after)
          if [ -z "${RDS_ENDPOINT}" ] || [ -z "${DB_USERNAME}" ] || [ -z "${DB_PASSWORD}" ]; then
            echo "RDS_ENDPOINT, DB_USERNAME, DB_PASSWORD secrets are required" >&2
            exit 1
          fi

          if [ -z "${API_USER_ENV}" ] || [ -z "${API_ADMIN_ENV}" ] || [ -z "${TRANSCODER_ENV}" ]; then
            echo "Warning: One or more service-specific env vars are empty" >&2
          fi
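
If a hard failure is preferred over a warning, a loop that names every missing secret keeps the error message precise. A sketch relying on bash's ${!var} indirect expansion (variable list taken from this review):

          # Fail fast, listing each missing secret by name (bash)
          missing=""
          for var in RDS_ENDPOINT DB_USERNAME DB_PASSWORD API_USER_ENV API_ADMIN_ENV TRANSCODER_ENV; do
            if [ -z "${!var}" ]; then
              missing="${missing} ${var}"
            fi
          done
          if [ -n "${missing}" ]; then
            echo "Missing required secrets:${missing}" >&2
            exit 1
          fi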
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/deploy-ec2-docker.yml around lines 99 - 102, Extend the
secret validation in the deployment script to also check API_USER_ENV,
API_ADMIN_ENV, and TRANSCODER_ENV; update the existing conditional that
currently validates RDS_ENDPOINT, DB_USERNAME, DB_PASSWORD to include these
three environment variables (or add explicit checks for each) and emit a clear
stderr message naming any missing secrets before exiting with status 1 so the
workflow fails fast when any of API_USER_ENV, API_ADMIN_ENV, or TRANSCODER_ENV
is unset.

arlen02-01 (Contributor, Author) commented

We plan to replace this later by moving the values out to AWS secret parameters so they can be managed there. For now they are configured as GitHub secret keys, which is hard to manage.

coderabbitai bot commented

@arlen02-01, understood on the plan to migrate to AWS Secrets Manager! As you say, AWS Secrets Manager enables centralized management and brings benefits such as automatic rotation and fine-grained access control, making it far easier to manage than GitHub Secrets.

Would you like me to create an issue to track this migration work?

coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/workflows/deploy-ec2-docker.yml:
- Around line 104-190: The deploy_service function currently builds
env_payload_b64 (containing DB credentials) and embeds it into the aws ssm
send-command parameters, which records sensitive data in CloudTrail/SSM history;
instead, stop embedding secrets in the SSM API call and store DB credentials in
AWS Secrets Manager or SSM Parameter Store (SecureString), give the EC2 instance
IAM permissions to read that secret, and modify deploy_service so the SSM
command no longer includes env_payload_b64 or the echo/base64 decode step but
runs a secure fetch on the instance (e.g., aws secretsmanager get-secret-value
or aws ssm get-parameter --with-decryption) to write /etc/oplust/${env_file}
with proper permissions before running docker; update any related
variables/commands (env_payload_b64, full_env_payload, the send-command commands
array entries that reference base64/echo, and the run_cmd flow) and ensure IAM
role/policy and secret name are used instead of raw credentials.

---

Duplicate comments:
In @.github/workflows/deploy-ec2-docker.yml:
- Around line 69-103: The deploy step's run script currently validates
RDS_ENDPOINT, DB_USERNAME, and DB_PASSWORD but does not validate API_USER_ENV,
API_ADMIN_ENV, or TRANSCODER_ENV, so add the same required-secret checks for
those three environment variables inside the run block (where IMAGE_TAG is set
and the RDS checks occur): test each of API_USER_ENV, API_ADMIN_ENV,
TRANSCODER_ENV for emptiness and if any are empty print a clear error (e.g.,
"API_USER_ENV, API_ADMIN_ENV, TRANSCODER_ENV secrets are required") to stderr
and exit 1 to stop deployment.
- Around line 1-16: The file starts with a BOM (U+FEFF) before the "name: Deploy
Docker Apps To EC2" YAML key which can break parsers; remove the invisible BOM
character at the very start of the file so the first character is the 'n' in
"name", re-save the file as UTF-8 without BOM, and re-run the workflow
lint/parse to confirm the "workflow_dispatch" and other keys parse correctly.
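
For the BOM finding above, the fix can be a one-liner. A sketch assuming GNU sed (\xEF\xBB\xBF is the UTF-8 BOM byte sequence):

# Strip a leading UTF-8 BOM from the workflow file in place (GNU sed)
sed -i '1s/^\xEF\xBB\xBF//' .github/workflows/deploy-ec2-docker.yml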

Comment on lines +104 to +190
deploy_service() {
  local target_tag="$1"
  local image_uri="$2"
  local container_name="$3"
  local env_file="$4"
  local port="$5"
  local env_payload="$6"

  local instance_id
  instance_id=$(aws ec2 describe-instances \
    --region "$AWS_REGION" \
    --filters "Name=tag:Name,Values=${target_tag}" "Name=instance-state-name,Values=running" \
    --query "Reservations[0].Instances[0].InstanceId" \
    --output text)

  if [ -z "$instance_id" ] || [ "$instance_id" = "None" ]; then
    echo "No running instance found for tag: ${target_tag}" >&2
    exit 1
  fi

  local full_env_payload
  full_env_payload=$(printf 'SPRING_DATASOURCE_URL=jdbc:mysql://%s:3306/%s\nSPRING_DATASOURCE_USERNAME=%s\nSPRING_DATASOURCE_PASSWORD=%s\n%s' "${RDS_ENDPOINT}" "${DB_NAME}" "${DB_USERNAME}" "${DB_PASSWORD}" "${env_payload}")

  local env_payload_b64
  env_payload_b64="$(printf '%s' "$full_env_payload" | base64 -w0)"

  local run_cmd
  if [ -n "$port" ]; then
    run_cmd="sudo docker run -d --name ${container_name} --restart unless-stopped -p ${port}:${port} --env-file ${env_file} ${image_uri}"
  else
    run_cmd="sudo docker run -d --name ${container_name} --restart unless-stopped --env-file ${env_file} ${image_uri}"
  fi

  local cmd_id
  cmd_id=$(aws ssm send-command \
    --region "$AWS_REGION" \
    --instance-ids "$instance_id" \
    --document-name "AWS-RunShellScript" \
    --comment "Deploy ${container_name}:${IMAGE_TAG}" \
    --parameters commands="[
      \"set -e\",
      \"sudo mkdir -p /etc/oplust\",
      \"echo '${env_payload_b64}' | base64 -d | sudo tee ${env_file} >/dev/null\",
      \"sudo chmod 600 ${env_file}\",
      \"aws ecr get-login-password --region $AWS_REGION | sudo docker login --username AWS --password-stdin $ECR_REGISTRY\",
      \"sudo docker pull ${image_uri}\",
      \"sudo docker rm -f ${container_name} || true\",
      \"${run_cmd}\"
    ]" \
    --query 'Command.CommandId' \
    --output text)

  echo "[$container_name] command id: $cmd_id (instance: $instance_id)"

  local status
  for _ in $(seq 1 120); do
    status=$(aws ssm get-command-invocation \
      --region "$AWS_REGION" \
      --command-id "$cmd_id" \
      --instance-id "$instance_id" \
      --query 'Status' \
      --output text 2>/dev/null || true)

    case "$status" in
      Success)
        echo "[$container_name] deployment success"
        return 0
        ;;
      Failed|Cancelled|TimedOut)
        echo "[$container_name] deployment failed with status: $status" >&2
        aws ssm get-command-invocation --region "$AWS_REGION" --command-id "$cmd_id" --instance-id "$instance_id" --query '{StdOut:StandardOutputContent,StdErr:StandardErrorContent}' --output json || true
        exit 1
        ;;
      Pending|InProgress|Delayed|"")
        sleep 5
        ;;
      *)
        echo "[$container_name] unexpected status: $status" >&2
        sleep 5
        ;;
    esac
  done

  echo "[$container_name] deployment timed out waiting for SSM command completion" >&2
  aws ssm get-command-invocation --region "$AWS_REGION" --command-id "$cmd_id" --instance-id "$instance_id" --query '{StdOut:StandardOutputContent,StdErr:StandardErrorContent}' --output json || true
  exit 1
}
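
For context, the deploy job presumably invokes this function once per service. A purely illustrative sketch of such calls (the tag names, ports, and env file paths are assumptions, not values from the actual workflow):

deploy_service "oplust-api-user"   "${ECR_REGISTRY}/api-user:${IMAGE_TAG}"   "api-user"   "/etc/oplust/api-user.env"   "8080" "${API_USER_ENV}"
deploy_service "oplust-api-admin"  "${ECR_REGISTRY}/api-admin:${IMAGE_TAG}"  "api-admin"  "/etc/oplust/api-admin.env"  "8081" "${API_ADMIN_ENV}"
deploy_service "oplust-transcoder" "${ECR_REGISTRY}/transcoder:${IMAGE_TAG}" "transcoder" "/etc/oplust/transcoder.env" ""     "${TRANSCODER_ENV}"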
coderabbitai bot commented Feb 21, 2026

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

AWS SSM SendCommand parameters CloudTrail logging storage

💡 Result:

  • Does CloudTrail log SendCommand parameters? Yes. AWS Systems Manager SendCommand is a CloudTrail management event (control plane), and CloudTrail records the API call including the requestParameters object. (docs.aws.amazon.com)
  • Which SendCommand fields can appear in requestParameters? Whatever you send in the API request—commonly DocumentName, InstanceIds/Targets, Parameters (for example the document’s commands parameter), Comment, OutputS3BucketName, OutputS3KeyPrefix, CloudWatchOutputConfig, etc. (See the API’s request syntax/parameters.) (docs.aws.amazon.com)
  • Any size limits / omissions? CloudTrail’s requestParameters field has a maximum size of 100 KB; if it exceeds that size, the field’s contents can be omitted from the event. (docs.aws.amazon.com)

Where are these CloudTrail logs stored?

  • Event history (default): View/search in the CloudTrail console for 90 days (not durable storage). (docs.aws.amazon.com)
  • Trail (durable files): Configure a CloudTrail trail to deliver log files to an S3 bucket you specify (optionally also to CloudWatch Logs). (docs.aws.amazon.com)
  • CloudTrail Lake (durable, queryable): Store events in a CloudTrail Lake event data store (instead of/in addition to S3 trails). (docs.aws.amazon.com)

Important related point (often confused):

  • CloudTrail records the API call metadata + request parameters; command output is stored via Run Command settings (for example S3 and/or CloudWatch Logs output), not “in CloudTrail.” (docs.aws.amazon.com)


🌐 Web query:

AWS SSM command history parameter values exposed security

💡 Result:

Yes—SSM Run Command parameter values can be exposed if you pass secrets as plaintext parameters.

  • Where they get exposed: When you run a command, the request (including parameter values you supply) is part of Systems Manager API activity, which is logged by AWS CloudTrail; anyone with access to the CloudTrail log location can potentially see those plaintext values. AWS explicitly warns not to include secrets in plaintext in Run Command requests. [1]
  • Command history deletion: After execution, details are stored in Run Command “Command history” and you can’t delete that history from the console. [2]
  • Retention: The Run Command console shows command history for up to 30 days. [1]

How to avoid exposing secrets

  • Don’t pass secrets as Run Command parameters. Put secrets in SSM Parameter Store as SecureString and retrieve them at execution time on the instance (AWS provides walkthroughs for this pattern). [1][3]
  • Lock down access to:
    • CloudTrail logs (S3 bucket / CloudTrail Lake) that record SSM API calls [1]
    • Run Command APIs/console views (e.g., who can call/list commands and invocations)
  • If your concern is command injection (different from secret exposure), consider using document features like interpolationType to treat inputs as string literals. [4]

Sources: AWS Systems Manager Run Command docs and warnings [1], deletion limitations for Run Command history [2], SecureString walkthrough [3], SSM document plugin reference (interpolationType) [4].


🌐 Web query:

AWS Secrets Manager vs SSM Parameter Store for EC2 credentials best practices

💡 Result:

Best-practice default for “EC2 credentials”

  1. Don’t store long‑term AWS credentials on EC2 at all. Use an IAM role (instance profile) so the instance gets temporary credentials automatically (“remove/replace/rotate” guidance). [1]
  2. For human access to instances, avoid shared SSH private keys. AWS explicitly recommends EC2 Instance Connect (and, commonly, Systems Manager Session Manager) instead of managing SSH keys yourself. [2]

When to use Secrets Manager vs SSM Parameter Store (for credentials your app must store)

Choose AWS Secrets Manager when:

  • You need managed secret rotation (built-in workflows; rotation can be scheduled frequently). [3]
  • You want purpose-built secrets lifecycle controls (replication, monitoring guidance, secret consumption patterns/agents/extensions). [3]
  • Cost is acceptable for the extra features ($0.40 per secret-month + API calls). [4]

Choose SSM Parameter Store (SecureString) when:

  • The value is configuration or a secret that won’t be rotated automatically (or you’ll build rotation yourself).
  • You want lower cost / simplicity: Standard parameters are no additional charge; Advanced adds per-parameter and API charges. [5]
  • You fit within Parameter Store quotas/feature needs (e.g., parameter tiers, size limits). [6]

Practical recommendations for EC2-based apps

  • AWS-to-AWS auth from EC2 (to S3, DynamoDB, etc.): use IAM role, not stored keys. [1]
  • Database passwords / third-party API keys used by the app on EC2:
    • Use Secrets Manager if you want automatic rotation. [3]
    • Use Parameter Store SecureString if rotation isn’t needed and you want a cheaper config store. [5]
  • SSH access: prefer EC2 Instance Connect rather than distributing/storing SSH keys. [2]

Sources

[1] AWS Well-Architected Framework – Store and use secrets securely (remove/replace/rotate; use IAM roles instead of long-term keys): (docs.aws.amazon.com)
[2] AWS Security Blog – note updated July 2, 2025 recommending EC2 Instance Connect vs managing SSH keys: (aws.amazon.com)
[3] AWS Secrets Manager docs – best practices incl. rotation: (docs.aws.amazon.com)
[4] AWS Secrets Manager pricing: (aws.amazon.com)
[5] AWS Systems Manager pricing – Parameter Store standard vs advanced: (aws.amazon.com)
[6] AWS Systems Manager docs – Parameter Store tiers/limits: (docs.aws.amazon.com)



Sensitive information is exposed directly in the SSM command parameters.

env_payload_b64 contains the DB credentials, and the parameters passed to the SendCommand API call are recorded in AWS CloudTrail. Anyone with access to the CloudTrail logs can read these values. The values are also kept in the SSM Run Command command history for 30 days and cannot be deleted from the console. Base64 encoding is not encryption and provides no security.

Fix: Store the DB credentials in AWS Secrets Manager or SSM Parameter Store (SecureString) and have the EC2 instance fetch them at runtime through its IAM role.
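
A sketch of the recommended pattern: instead of shipping env_payload_b64 inside the SSM command, the command tells the instance to fetch the secret itself. The parameter name is hypothetical, and the instance's IAM role would need ssm:GetParameter plus decrypt permission on the KMS key:

# Runs on the EC2 instance via SSM; no secret material appears in the SendCommand parameters
aws ssm get-parameter \
  --region "$AWS_REGION" \
  --name "/oplust/${container_name}/env" \
  --with-decryption \
  --query 'Parameter.Value' \
  --output text | sudo tee "${env_file}" >/dev/null
sudo chmod 600 "${env_file}"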

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/deploy-ec2-docker.yml around lines 104 - 190, The
deploy_service function currently builds env_payload_b64 (containing DB
credentials) and embeds it into the aws ssm send-command parameters, which
records sensitive data in CloudTrail/SSM history; instead, stop embedding
secrets in the SSM API call and store DB credentials in AWS Secrets Manager or
SSM Parameter Store (SecureString), give the EC2 instance IAM permissions to
read that secret, and modify deploy_service so the SSM command no longer
includes env_payload_b64 or the echo/base64 decode step but runs a secure fetch
on the instance (e.g., aws secretsmanager get-secret-value or aws ssm
get-parameter --with-decryption) to write /etc/oplust/${env_file} with proper
permissions before running docker; update any related variables/commands
(env_payload_b64, full_env_payload, the send-command commands array entries that
reference base64/echo, and the run_cmd flow) and ensure IAM role/policy and
secret name are used instead of raw credentials.

arlen02-01 (Contributor, Author) commented

This issue arises because the secret keys are managed in GitHub. Since there is only a single IAM user, though, it is not a problem in practice.
+) We plan to manage the secret keys in AWS Parameter Store later (which resolves the issue above).

coderabbitai bot commented

@arlen02-01, understood. You consider this acceptable for the current situation, and there is a plan to migrate to AWS Parameter Store later.

Shall I file this improvement as a GitHub issue so it can be referenced when the work happens? I will include links to this PR and this comment thread.

arlen02-01 merged commit 48ecf4a into develop Feb 21, 2026
1 check passed

Labels

deploy (project deployment)
