194 changes: 194 additions & 0 deletions .github/workflows/deploy-ec2-docker.yml
@@ -0,0 +1,194 @@
name: Deploy Docker Apps To EC2

on:
  workflow_dispatch:
    inputs:
      image_tag:
        description: "Docker image tag to deploy (default: commit SHA)"
        required: false
        type: string
  pull_request:
    types:
      - closed

env:
  AWS_REGION: ap-northeast-2

jobs:
  build-and-push:
    if: ${{ github.event_name == 'workflow_dispatch' || (github.event_name == 'pull_request' && github.event.pull_request.merged == true && github.event.pull_request.base.ref == 'main' && github.event.pull_request.head.ref == 'develop') }}
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        include:
          - service: api-user
            ecr_repo: oplust-api-user
          - service: api-admin
            ecr_repo: oplust-api-admin
          - service: transcoder
            ecr_repo: oplust-transcoder

    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}

      - name: Login to ECR
        uses: aws-actions/amazon-ecr-login@v2

      - name: Ensure ECR repository exists
        run: |
          aws ecr describe-repositories --repository-names "${{ matrix.ecr_repo }}" >/dev/null 2>&1 || \
            aws ecr create-repository --repository-name "${{ matrix.ecr_repo }}" >/dev/null

      - name: Build and push image
        env:
          ECR_REGISTRY: ${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.${{ env.AWS_REGION }}.amazonaws.com
          IMAGE_TAG_INPUT: ${{ github.event.inputs.image_tag }}
        run: |
          IMAGE_TAG="${IMAGE_TAG_INPUT:-${GITHUB_SHA}}"
          IMAGE_URI="${ECR_REGISTRY}/${{ matrix.ecr_repo }}:${IMAGE_TAG}"
          IMAGE_URI_LATEST="${ECR_REGISTRY}/${{ matrix.ecr_repo }}:latest"

          docker build \
            -f "apps/${{ matrix.service }}/Dockerfile" \
            -t "${IMAGE_URI}" \
            -t "${IMAGE_URI_LATEST}" \
            .

          docker push "${IMAGE_URI}"
          docker push "${IMAGE_URI_LATEST}"

  deploy:
    if: ${{ github.event_name == 'workflow_dispatch' || (github.event_name == 'pull_request' && github.event.pull_request.merged == true && github.event.pull_request.base.ref == 'main' && github.event.pull_request.head.ref == 'develop') }}
    runs-on: ubuntu-latest
    needs: build-and-push

    steps:
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}

      - name: Deploy to EC2 instances via SSM
        env:
          ECR_REGISTRY: ${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.${{ env.AWS_REGION }}.amazonaws.com
          IMAGE_TAG_INPUT: ${{ github.event.inputs.image_tag }}
          PROJECT_NAME: oplust
          DB_NAME: oplust
          RDS_ENDPOINT: ${{ secrets.RDS_ENDPOINT }}
          DB_USERNAME: ${{ secrets.DB_USERNAME }}
          DB_PASSWORD: ${{ secrets.DB_PASSWORD }}
          API_USER_ENV: ${{ secrets.API_USER_ENV }}
          API_ADMIN_ENV: ${{ secrets.API_ADMIN_ENV }}
          TRANSCODER_ENV: ${{ secrets.TRANSCODER_ENV }}
        run: |
          set -euo pipefail

          IMAGE_TAG="${IMAGE_TAG_INPUT:-${GITHUB_SHA}}"

          if [ -z "${RDS_ENDPOINT}" ] || [ -z "${DB_USERNAME}" ] || [ -z "${DB_PASSWORD}" ]; then
            echo "RDS_ENDPOINT, DB_USERNAME, DB_PASSWORD secrets are required" >&2
            exit 1
          fi
Comment on lines +99 to +102

coderabbitai bot commented on Feb 21, 2026

⚠️ Potential issue | 🟡 Minor

The per-service environment variable secrets are not validated.

The script validates RDS_ENDPOINT, DB_USERNAME, and DB_PASSWORD, but not the API_USER_ENV, API_ADMIN_ENV, and TRANSCODER_ENV secrets. If these values are empty, the containers may not behave as expected.

🛡️ Suggested additional validation
          if [ -z "${RDS_ENDPOINT}" ] || [ -z "${DB_USERNAME}" ] || [ -z "${DB_PASSWORD}" ]; then
            echo "RDS_ENDPOINT, DB_USERNAME, DB_PASSWORD secrets are required" >&2
            exit 1
          fi
+
+         if [ -z "${API_USER_ENV}" ] || [ -z "${API_ADMIN_ENV}" ] || [ -z "${TRANSCODER_ENV}" ]; then
+           echo "Warning: One or more service-specific env vars are empty" >&2
+         fi
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
(before)
          if [ -z "${RDS_ENDPOINT}" ] || [ -z "${DB_USERNAME}" ] || [ -z "${DB_PASSWORD}" ]; then
            echo "RDS_ENDPOINT, DB_USERNAME, DB_PASSWORD secrets are required" >&2
            exit 1
          fi
(after)
          if [ -z "${RDS_ENDPOINT}" ] || [ -z "${DB_USERNAME}" ] || [ -z "${DB_PASSWORD}" ]; then
            echo "RDS_ENDPOINT, DB_USERNAME, DB_PASSWORD secrets are required" >&2
            exit 1
          fi
          if [ -z "${API_USER_ENV}" ] || [ -z "${API_ADMIN_ENV}" ] || [ -z "${TRANSCODER_ENV}" ]; then
            echo "Warning: One or more service-specific env vars are empty" >&2
          fi
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/deploy-ec2-docker.yml around lines 99-102, extend the
secret validation in the deployment script to also check API_USER_ENV,
API_ADMIN_ENV, and TRANSCODER_ENV; update the existing conditional that
currently validates RDS_ENDPOINT, DB_USERNAME, and DB_PASSWORD to include these
three environment variables (or add explicit checks for each), and emit a clear
stderr message naming any missing secrets before exiting with status 1 so the
workflow fails fast when any of API_USER_ENV, API_ADMIN_ENV, or TRANSCODER_ENV
is unset.
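
As a rough sketch of that fail-fast behavior (hypothetical, not part of the PR or CodeRabbit's committable suggestion; the variable names come from the workflow's env block):

          # Hypothetical fail-fast check: collect every missing secret, then exit once.
          missing=""
          for var in RDS_ENDPOINT DB_USERNAME DB_PASSWORD API_USER_ENV API_ADMIN_ENV TRANSCODER_ENV; do
            if [ -z "${!var:-}" ]; then  # bash indirect expansion reads the variable named by $var
              missing="${missing} ${var}"
            fi
          done
          if [ -n "${missing}" ]; then
            echo "Missing required secrets:${missing}" >&2
            exit 1
          fi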

arlen02-01 (Contributor, Author)

We plan to replace this later by moving these out to AWS secret parameters and managing them there. For now they are configured as GitHub secrets, which is hard to manage.

coderabbitai bot

@arlen02-01, understood. The plan to migrate to AWS Secrets Manager makes sense! As you say, AWS Secrets Manager enables centralized management and offers benefits such as automatic rotation and fine-grained access control, so it is much easier to manage than GitHub Secrets.

Would you like me to create an issue to track this migration work?


          deploy_service() {
            local target_tag="$1"
            local image_uri="$2"
            local container_name="$3"
            local env_file="$4"
            local port="$5"
            local env_payload="$6"

            local instance_id
            instance_id=$(aws ec2 describe-instances \
              --region "$AWS_REGION" \
              --filters "Name=tag:Name,Values=${target_tag}" "Name=instance-state-name,Values=running" \
              --query "Reservations[0].Instances[0].InstanceId" \
              --output text)

            if [ -z "$instance_id" ] || [ "$instance_id" = "None" ]; then
              echo "No running instance found for tag: ${target_tag}" >&2
              exit 1
            fi

            local full_env_payload
            full_env_payload=$(printf 'SPRING_DATASOURCE_URL=jdbc:mysql://%s:3306/%s\nSPRING_DATASOURCE_USERNAME=%s\nSPRING_DATASOURCE_PASSWORD=%s\n%s' "${RDS_ENDPOINT}" "${DB_NAME}" "${DB_USERNAME}" "${DB_PASSWORD}" "${env_payload}")

            local env_payload_b64
            env_payload_b64="$(printf '%s' "$full_env_payload" | base64 -w0)"

            local run_cmd
            if [ -n "$port" ]; then
              run_cmd="sudo docker run -d --name ${container_name} --restart unless-stopped -p ${port}:${port} --env-file ${env_file} ${image_uri}"
            else
              run_cmd="sudo docker run -d --name ${container_name} --restart unless-stopped --env-file ${env_file} ${image_uri}"
            fi

            local cmd_id
            cmd_id=$(aws ssm send-command \
              --region "$AWS_REGION" \
              --instance-ids "$instance_id" \
              --document-name "AWS-RunShellScript" \
              --comment "Deploy ${container_name}:${IMAGE_TAG}" \
              --parameters commands="[
                \"set -e\",
                \"sudo mkdir -p /etc/oplust\",
                \"echo '${env_payload_b64}' | base64 -d | sudo tee ${env_file} >/dev/null\",
                \"sudo chmod 600 ${env_file}\",
                \"aws ecr get-login-password --region $AWS_REGION | sudo docker login --username AWS --password-stdin $ECR_REGISTRY\",
                \"sudo docker pull ${image_uri}\",
                \"sudo docker rm -f ${container_name} || true\",
                \"${run_cmd}\"
              ]" \
              --query 'Command.CommandId' \
              --output text)

            echo "[$container_name] command id: $cmd_id (instance: $instance_id)"

            local status
            for _ in $(seq 1 120); do
              status=$(aws ssm get-command-invocation \
                --region "$AWS_REGION" \
                --command-id "$cmd_id" \
                --instance-id "$instance_id" \
                --query 'Status' \
                --output text 2>/dev/null || true)

              case "$status" in
                Success)
                  echo "[$container_name] deployment success"
                  return 0
                  ;;
                Failed|Cancelled|TimedOut)
                  echo "[$container_name] deployment failed with status: $status" >&2
                  aws ssm get-command-invocation --region "$AWS_REGION" --command-id "$cmd_id" --instance-id "$instance_id" --query '{StdOut:StandardOutputContent,StdErr:StandardErrorContent}' --output json || true
                  exit 1
                  ;;
                Pending|InProgress|Delayed|"")
                  sleep 5
                  ;;
                *)
                  echo "[$container_name] unexpected status: $status" >&2
                  sleep 5
                  ;;
              esac
            done

            echo "[$container_name] deployment timed out waiting for SSM command completion" >&2
            aws ssm get-command-invocation --region "$AWS_REGION" --command-id "$cmd_id" --instance-id "$instance_id" --query '{StdOut:StandardOutputContent,StdErr:StandardErrorContent}' --output json || true
            exit 1
          }
Comment on lines +104 to +190

coderabbitai bot commented on Feb 21, 2026

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

AWS SSM SendCommand parameters CloudTrail logging storage

💡 Result:

  • Does CloudTrail log SendCommand parameters? Yes. AWS Systems Manager SendCommand is a CloudTrail management event (control plane), and CloudTrail records the API call including the requestParameters object. (docs.aws.amazon.com)
  • Which SendCommand fields can appear in requestParameters? Whatever you send in the API request—commonly DocumentName, InstanceIds/Targets, Parameters (for example the document’s commands parameter), Comment, OutputS3BucketName, OutputS3KeyPrefix, CloudWatchOutputConfig, etc. (See the API’s request syntax/parameters.) (docs.aws.amazon.com)
  • Any size limits / omissions? CloudTrail’s requestParameters field has a maximum size of 100 KB; if it exceeds that size, the field’s contents can be omitted from the event. (docs.aws.amazon.com)

Where are these CloudTrail logs stored?

  • Event history (default): View/search in the CloudTrail console for 90 days (not durable storage). (docs.aws.amazon.com)
  • Trail (durable files): Configure a CloudTrail trail to deliver log files to an S3 bucket you specify (optionally also to CloudWatch Logs). (docs.aws.amazon.com)
  • CloudTrail Lake (durable, queryable): Store events in a CloudTrail Lake event data store (instead of/in addition to S3 trails). (docs.aws.amazon.com)

Important related point (often confused):

  • CloudTrail records the API call metadata + request parameters; command output is stored via Run Command settings (for example S3 and/or CloudWatch Logs output), not “in CloudTrail.” (docs.aws.amazon.com)


🌐 Web query:

AWS SSM command history parameter values exposed security

💡 Result:

Yes—SSM Run Command parameter values can be exposed if you pass secrets as plaintext parameters.

  • Where they get exposed: When you run a command, the request (including parameter values you supply) is part of Systems Manager API activity, which is logged by AWS CloudTrail; anyone with access to the CloudTrail log location can potentially see those plaintext values. AWS explicitly warns not to include secrets in plaintext in Run Command requests. [1]
  • Command history deletion: After execution, details are stored in Run Command “Command history” and you can’t delete that history from the console. [2]
  • Retention: The Run Command console shows command history for up to 30 days. [1]

How to avoid exposing secrets

  • Don’t pass secrets as Run Command parameters. Put secrets in SSM Parameter Store as SecureString and retrieve them at execution time on the instance (AWS provides walkthroughs for this pattern). [1][3]
  • Lock down access to:
    • CloudTrail logs (S3 bucket / CloudTrail Lake) that record SSM API calls [1]
    • Run Command APIs/console views (e.g., who can call/list commands and invocations)
  • If your concern is command injection (different from secret exposure), consider using document features like interpolationType to treat inputs as string literals. [4]

Sources: AWS Systems Manager Run Command docs and warnings [1], deletion limitations for Run Command history [2], SecureString walkthrough [3], SSM document plugin reference (interpolationType) [4].
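
For reference, the SecureString pattern mentioned above would look roughly like this on the instance; the parameter name /oplust/db-password is a placeholder, not something defined in this PR:

          # Hypothetical: the instance reads a SecureString via its IAM role,
          # so the secret never travels through SendCommand parameters.
          DB_PASSWORD="$(aws ssm get-parameter \
            --name "/oplust/db-password" \
            --with-decryption \
            --query 'Parameter.Value' \
            --output text)"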


🌐 Web query:

AWS Secrets Manager vs SSM Parameter Store for EC2 credentials best practices

💡 Result:

Best-practice default for “EC2 credentials”

  1. Don’t store long‑term AWS credentials on EC2 at all. Use an IAM role (instance profile) so the instance gets temporary credentials automatically (“remove/replace/rotate” guidance). [1]
  2. For human access to instances, avoid shared SSH private keys. AWS explicitly recommends EC2 Instance Connect (and, commonly, Systems Manager Session Manager) instead of managing SSH keys yourself. [2]

When to use Secrets Manager vs SSM Parameter Store (for credentials your app must store)

Choose AWS Secrets Manager when:

  • You need managed secret rotation (built-in workflows; rotation can be scheduled frequently). [3]
  • You want purpose-built secrets lifecycle controls (replication, monitoring guidance, secret consumption patterns/agents/extensions). [3]
  • Cost is acceptable for the extra features ($0.40 per secret-month + API calls). [4]

Choose SSM Parameter Store (SecureString) when:

  • The value is configuration or a secret that won’t be rotated automatically (or you’ll build rotation yourself).
  • You want lower cost / simplicity: Standard parameters are no additional charge; Advanced adds per-parameter and API charges. [5]
  • You fit within Parameter Store quotas/feature needs (e.g., parameter tiers, size limits). [6]

Practical recommendations for EC2-based apps

  • AWS-to-AWS auth from EC2 (to S3, DynamoDB, etc.): use IAM role, not stored keys. [1]
  • Database passwords / third-party API keys used by the app on EC2:
    • Use Secrets Manager if you want automatic rotation. [3]
    • Use Parameter Store SecureString if rotation isn’t needed and you want a cheaper config store. [5]
  • SSH access: prefer EC2 Instance Connect rather than distributing/storing SSH keys. [2]

Sources

[1] AWS Well-Architected Framework – Store and use secrets securely (remove/replace/rotate; use IAM roles instead of long-term keys): (docs.aws.amazon.com)
[2] AWS Security Blog – note updated July 2, 2025 recommending EC2 Instance Connect vs managing SSH keys: (aws.amazon.com)
[3] AWS Secrets Manager docs – best practices incl. rotation: (docs.aws.amazon.com)
[4] AWS Secrets Manager pricing: (aws.amazon.com)
[5] AWS Systems Manager pricing – Parameter Store standard vs advanced: (aws.amazon.com)
[6] AWS Systems Manager docs – Parameter Store tiers/limits: (docs.aws.amazon.com)
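
Purely illustrative: if the team chose Secrets Manager instead, the instance-side fetch would be along these lines (the secret id oplust/db is an assumption, not defined anywhere in this PR):

          # Hypothetical: fetch the secret payload from Secrets Manager at runtime.
          aws secretsmanager get-secret-value \
            --secret-id "oplust/db" \
            --query 'SecretString' \
            --output text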


Sensitive information is exposed directly in the SSM command parameters.

env_payload_b64 contains the DB credentials, and the parameters passed in a SendCommand API call are recorded in AWS CloudTrail. Anyone with permission to access the CloudTrail logs can read these values. They are also stored in the SSM Run Command command history for 30 days and cannot be deleted from the console. Base64 encoding is not encryption and provides no security.

Fix: store the DB credentials in AWS Secrets Manager or SSM Parameter Store (SecureString), and fetch them at runtime through the EC2 instance's IAM role.
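
A minimal sketch of the reworked instance-side steps, assuming the env file content has been stored as a SecureString parameter (the name /oplust/api-user/env and the file path are assumptions, not the PR's current implementation):

          # Hypothetical replacement for the echo '${env_payload_b64}' | base64 -d step:
          # the instance pulls its own env file from Parameter Store at deploy time.
          sudo mkdir -p /etc/oplust
          aws ssm get-parameter \
            --name "/oplust/api-user/env" \
            --with-decryption \
            --query 'Parameter.Value' \
            --output text | sudo tee /etc/oplust/api-user.env >/dev/null
          sudo chmod 600 /etc/oplust/api-user.env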

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/deploy-ec2-docker.yml around lines 104-190: the
deploy_service function currently builds env_payload_b64 (containing DB
credentials) and embeds it in the aws ssm send-command parameters, which
records sensitive data in CloudTrail/SSM history. Instead, stop embedding
secrets in the SSM API call and store the DB credentials in AWS Secrets Manager
or SSM Parameter Store (SecureString), give the EC2 instance IAM permissions to
read that secret, and modify deploy_service so the SSM command no longer
includes env_payload_b64 or the echo/base64 decode step but runs a secure fetch
on the instance (e.g., aws secretsmanager get-secret-value or aws ssm
get-parameter --with-decryption) to write /etc/oplust/${env_file} with proper
permissions before running docker. Update any related variables/commands
(env_payload_b64, full_env_payload, the send-command commands array entries
that reference base64/echo, and the run_cmd flow) and ensure an IAM role/policy
and secret name are used instead of raw credentials.

arlen02-01 (Contributor, Author)

This issue comes up because the secret keys are managed in Git. However, since there is only one IAM user, it isn't a problem.
+) We plan to manage the secret keys with AWS Parameter Store later (which resolves the issue above).

coderabbitai bot

@arlen02-01, understood. You've judged this acceptable for the current setup, and you plan to migrate to AWS Parameter Store later.

Would you like me to register this improvement as a GitHub issue so you can refer back to it when you do that work? I'll include links to this PR and this comment.


deploy_service "${PROJECT_NAME}-user-ec2" "${ECR_REGISTRY}/oplust-api-user:${IMAGE_TAG}" "oplust-api-user" "/etc/oplust/api-user.env" "8080" "${API_USER_ENV}"
deploy_service "${PROJECT_NAME}-admin-ec2" "${ECR_REGISTRY}/oplust-api-admin:${IMAGE_TAG}" "oplust-api-admin" "/etc/oplust/api-admin.env" "8081" "${API_ADMIN_ENV}"
deploy_service "${PROJECT_NAME}-worker-ec2" "${ECR_REGISTRY}/oplust-transcoder:${IMAGE_TAG}" "oplust-transcoder" "/etc/oplust/transcoder.env" "" "${TRANSCODER_ENV}"