chore: migrate to GitOps workflow with ArgoCD

- Update build.yaml with branch-aware image tagging (branch-sha format)
- Add update-manifest.yaml to update k8s-config manifests
- Back up old deploy.yaml (no longer needed with GitOps)

Refs:
- k8s-config/argocd/GITOPS_CI_CD_WORKFLOW.md
- k8s-config/argocd/SERVICE_MIGRATION_GUIDE.md
Walkthrough

The pull request establishes a multi-stage CI/CD pipeline for the authentication service. The build workflow is updated to support branch-aware Docker image tagging on the main and dev branches, with artifact name changes. Two new deployment workflows are introduced: one applies Kubernetes deployments after successful builds, and another automates manifest synchronization across branches using branch-specific image tags.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Developer
    participant GitHub as GitHub<br/>(Main/Dev)
    participant BuildWF as Build Workflow
    participant Registry as Docker<br/>Registry
    participant DeployWF as Deploy Workflow
    participant ManifestWF as Manifest<br/>Workflow
    participant K8sRepo as k8s-config<br/>Repository
    participant K8sCluster as Kubernetes<br/>Cluster

    Developer->>GitHub: Push to main/dev
    GitHub->>BuildWF: Trigger build (branch-aware)
    BuildWF->>Registry: Build & push image<br/>(ghcr.io/.../auth:branch-sha)
    BuildWF-->>GitHub: Mark build complete
    GitHub->>DeployWF: Trigger on build success
    DeployWF->>K8sRepo: Checkout k8s-config
    DeployWF->>K8sCluster: Update & apply<br/>deployment manifest
    DeployWF->>K8sCluster: Wait for rollout
    GitHub->>ManifestWF: Trigger on build success
    ManifestWF->>K8sRepo: Checkout (branch-matched)
    ManifestWF->>K8sRepo: Update manifest with<br/>new image tag (yq)
    ManifestWF->>K8sRepo: Commit & push changes
    Note over ManifestWF,K8sRepo: Bot-driven commit<br/>to matching branch
```
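The two downstream workflows in the diagram are chained off the build via GitHub's `workflow_run` event. A minimal trigger sketch; the workflow name here is an assumption based on the archived deploy file, not confirmed from the new workflows:

```yaml
# Hypothetical trigger block for a downstream workflow.
# "Build and Package Service" must match the build workflow's `name:`.
on:
  workflow_run:
    workflows: ["Build and Package Service"]
    types: [completed]
    branches: [main, dev]
```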
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Pull Request Overview
This PR transitions the authentication service from direct Kubernetes deployment to a GitOps-based approach using ArgoCD. The changes introduce branch-aware Docker image tagging and a manifest update workflow that automatically synchronizes with a separate k8s-config repository.
Key Changes
- GitOps workflow: New `update-manifest.yaml` workflow that updates Kubernetes manifests in a separate repository instead of directly deploying to the cluster
- Branch-aware tagging: Updated `build.yaml` to create Docker images with tags in the format `{branch}-{sha}` (e.g., `dev-abc1234`, `main-xyz5678`)
- Workflow modernization: The old direct deployment workflow is archived as `deploy.yaml.old`
Reviewed Changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 12 comments.
| File | Description |
|---|---|
| `.github/workflows/update-manifest.yaml` | New GitOps workflow that updates image tags in the k8s-config repository and commits changes for ArgoCD to sync |
| `.github/workflows/deploy.yaml.old` | Archived version of the old direct-deployment workflow, kept for reference |
| `.github/workflows/build.yaml` | Updated to support branch-aware Docker image tagging with `{branch}-{sha}` format and generic artifact naming |
```yaml
SERVICE_NAME: "authentication"           # e.g., "timelogging_service", "frontend_web", "authentication"
DEPLOYMENT_FILE: "auth-deployment.yaml"  # e.g., "timelogging-deployment.yaml", "frontend-deployment.yaml"
```
Hardcoded service-specific values should be parameterized. These `SERVICE_NAME` and `DEPLOYMENT_FILE` values are specific to the authentication service and would need to be manually changed for each microservice. Consider using repository variables or deriving them from the repository name to make this workflow reusable across different microservices.
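One way to do this, sketched under the assumption that the repository name maps cleanly to the service; a repository variable like `vars.DEPLOYMENT_FILE` is hypothetical and would need to be configured separately per repo:

```yaml
# Hypothetical: derive service identity instead of hardcoding it.
env:
  SERVICE_NAME: ${{ github.event.repository.name }}  # repo name as service name
  DEPLOYMENT_FILE: ${{ vars.DEPLOYMENT_FILE }}       # set per-repo under Settings → Variables
```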
```diff
  uses: docker/metadata-action@v5
  with:
-   images: ghcr.io/${{ github.repository }} # e.g., ghcr.io/randitha/Authentication
+   images: ghcr.io/techtorque-2025/authentication
```
Hardcoded repository-specific image name. The image name `ghcr.io/techtorque-2025/authentication` is hardcoded and specific to this authentication service. For a reusable template workflow (as indicated by the file comments), this should derive the image name dynamically from the repository context. For example:

```yaml
images: ghcr.io/${{ github.repository }}
```

This would make the workflow truly reusable across microservices.
Suggested change:

```diff
- images: ghcr.io/techtorque-2025/authentication
+ images: ghcr.io/${{ github.repository }}
```
```diff
  uses: actions/download-artifact@v4
  with:
-   name: auth-service-jar
+   name: service-jar
```
Inconsistent artifact naming between upload and download. The artifact is uploaded with the name `service-jar` (line 52); this creates a potential collision if the workflow template is copied across multiple repositories without changes, as artifact names should be unique within a workflow run. Both occurrences should use a consistent, repository-specific naming pattern.
Suggested change:

```diff
- name: service-jar
+ name: service-jar-${{ github.event.repository.name }}
```
```yaml
# For Node.js/Next.js services (Frontend):
# - name: Use Node.js and cache npm
#   uses: actions/setup-node@v4
#   with:
#     node-version: '22'
#     cache: 'npm'
#
# - name: Install dependencies
#   run: npm ci
#
# - name: Run linter
#   run: npm run lint
#
# - name: Build
#   run: npm run build
```
Commented-out code should be removed or properly documented. Large blocks of commented Node.js/Frontend code (lines 55-69) add clutter and reduce maintainability. If this workflow is meant to be a template, consider:
- Creating separate workflow files for Java and Node.js services, or
- Using a matrix strategy to handle different service types, or
- Moving template examples to documentation rather than commented code
Suggested change (delete the commented block):

```diff
- # For Node.js/Next.js services (Frontend):
- # - name: Use Node.js and cache npm
- #   uses: actions/setup-node@v4
- #   with:
- #     node-version: '22'
- #     cache: 'npm'
- #
- # - name: Install dependencies
- #   run: npm ci
- #
- # - name: Run linter
- #   run: npm run lint
- #
- # - name: Build
- #   run: npm run build
```
```yaml
flavor: |
  latest=false
```
The `flavor: latest=false` configuration interacts with the `latest` tag definition. On line 105 you explicitly create a `latest` tag for the default branch, but on line 107 you set `flavor: latest=false`, which looks contradictory at first glance. If you want to control the `latest` tag manually (as you're doing on line 105), `flavor: latest=false` is in fact correct; however, verify this is the intended behavior, as the combination may cause confusion.
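For reference, a sketch of a non-conflicting combination, assuming the intent is to publish `latest` only from the default branch; the tag lines are illustrative, not the PR's actual configuration:

```yaml
# Disable the automatic `latest` flavor and manage the tag explicitly,
# so only one mechanism controls when `latest` is published.
tags: |
  type=raw,value=latest,enable={{is_default_branch}}
flavor: |
  latest=false
```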
```yaml
tags: |
  type=sha,prefix=
  # Branch + short SHA (e.g., dev-abc1234 or main-xyz5678)
  type=raw,value=${{ steps.branch.outputs.name }}-{{sha}},enable=true
```
Invalid `type=raw` syntax with Docker metadata action. The `{{sha}}` placeholder syntax is not valid for `type=raw` in docker/metadata-action v5. The current tag definition is:

```yaml
type=raw,value=${{ steps.branch.outputs.name }}-{{sha}}
```

However, `{{sha}}` is only available for certain tag types like `type=sha`. Consider using:

```yaml
type=sha,prefix=${{ steps.branch.outputs.name }}-,format=short
```

Suggested change:

```diff
- type=raw,value=${{ steps.branch.outputs.name }}-{{sha}},enable=true
+ type=sha,prefix=${{ steps.branch.outputs.name }}-,format=short,enable=true
```
```diff
  uses: actions/upload-artifact@v4
  with:
-   name: auth-service-jar
+   name: service-jar
```
Artifact name changed from `auth-service-jar` to the generic `service-jar`. While this makes the workflow more generic, it could cause issues if multiple services use the same artifact name in parallel workflows. The previous name `auth-service-jar` was more specific and less likely to collide. Consider using a name that includes the repository name or a unique identifier:

```yaml
name: ${{ github.event.repository.name }}-jar
```

Suggested change:

```diff
- name: service-jar
+ name: ${{ github.event.repository.name }}-jar
```
```yaml
- name: Install yq (YAML processor)
  run: |
    sudo wget -qO /usr/bin/yq https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64
```
Missing integrity verification for the `wget` download. The command uses `-qO` but doesn't verify the downloaded binary's integrity. Consider adding checksum verification. Additionally, downloading from `latest` can lead to non-reproducible builds. Consider pinning to a specific version:

```shell
VERSION="v4.35.1"  # or latest stable version
sudo wget -qO /usr/bin/yq "https://github.com/mikefarah/yq/releases/download/${VERSION}/yq_linux_amd64"
echo "expected_checksum /usr/bin/yq" | sha256sum -c
sudo chmod +x /usr/bin/yq
```

Suggested change:

```diff
- sudo wget -qO /usr/bin/yq https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64
+ YQ_VERSION="v4.35.1"
+ YQ_BINARY_URL="https://github.com/mikefarah/yq/releases/download/${YQ_VERSION}/yq_linux_amd64"
+ YQ_CHECKSUM_URL="https://github.com/mikefarah/yq/releases/download/${YQ_VERSION}/checksums"
+ sudo wget -qO /usr/bin/yq "${YQ_BINARY_URL}"
+ wget -qO /tmp/yq_checksums "${YQ_CHECKSUM_URL}"
+ grep "yq_linux_amd64" /tmp/yq_checksums | sha256sum -c --ignore-missing
```
```yaml
# REPLACEMENTS NEEDED:
# - auth-service: e.g., "auth-service", "time-logging-service" (for Java services)
# - authentication: e.g., "authentication", "timelogging_service", "frontend_web"
# - Uncomment Node.js steps for Frontend_Web
```
Implementation notes should not be in production workflow files. Lines 133-136 contain replacement instructions that are meant for developers implementing this template. These comments should either be:
- Removed once the workflow is configured for a specific service, or
- Moved to a separate README or documentation file
Having these in the workflow file suggests this is still a template that hasn't been properly customized for the authentication service.
Suggested change (delete these comments):

```diff
- # REPLACEMENTS NEEDED:
- # - auth-service: e.g., "auth-service", "time-logging-service" (for Java services)
- # - authentication: e.g., "authentication", "timelogging_service", "frontend_web"
- # - Uncomment Node.js steps for Frontend_Web
```
```yaml
path: 'k8s-config'
```
Potential branch checkout failure when the k8s-config branch doesn't exist. If a microservice pushes to a branch (e.g., `feature-branch`) that doesn't exist in the k8s-config repository, this checkout will fail. Consider adding error handling or defaulting to a base branch (e.g., `dev` or `main`) if the matching branch doesn't exist:

```yaml
- name: Checkout k8s-config repo (matching branch)
  uses: actions/checkout@v4
  with:
    repository: 'TechTorque-2025/k8s-config'
    token: ${{ secrets.REPO_ACCESS_TOKEN }}
    ref: ${{ steps.info.outputs.branch }}
    path: 'k8s-config'
  continue-on-error: true

- name: Fallback to main branch if needed
  if: failure()
  uses: actions/checkout@v4
  with:
    repository: 'TechTorque-2025/k8s-config'
    token: ${{ secrets.REPO_ACCESS_TOKEN }}
    ref: 'main'
    path: 'k8s-config'
```

Suggested change:

```diff
    path: 'k8s-config'
+ continue-on-error: true
+
+ - name: Fallback to main branch if needed
+   if: failure()
+   uses: actions/checkout@v4
+   with:
+     repository: 'TechTorque-2025/k8s-config'
+     token: ${{ secrets.REPO_ACCESS_TOKEN }}
+     ref: 'main'
+     path: 'k8s-config'
```
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
.github/workflows/build.yaml (1)
21-47: Job name doesn't match actual behavior. The job is named "Build and Test" but line 47 explicitly skips tests with `-DskipTests`. Either rename the job to "Build" or remove the `-DskipTests` flag to actually run tests. Apply this diff to fix the job name:

```diff
- build-test:
-   name: Build and Test
+ build-test:
+   name: Build
```

Or alternatively, remove `-DskipTests` to actually run tests:

```diff
  - name: Build with Maven
-   run: mvn -B clean package -DskipTests --file auth-service/pom.xml
+   run: mvn -B clean package --file auth-service/pom.xml
```
🧹 Nitpick comments (3)
.github/workflows/build.yaml (2)
75: Verify branch condition syntax for consistency. The condition uses explicit ref comparisons. Consider using `github.ref_name` for cleaner branch name access, which would simplify the condition.

```diff
- if: github.event_name == 'push' && (github.ref == 'refs/heads/main' || github.ref == 'refs/heads/dev')
+ if: github.event_name == 'push' && (github.ref_name == 'main' || github.ref_name == 'dev')
```
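The equivalence of the two expressions can be sketched in plain shell; the values are illustrative, since on a real runner they come from the GitHub context:

```shell
# github.ref holds the fully qualified ref; github.ref_name is its short form.
GITHUB_REF="refs/heads/dev"

# What ${{ github.ref_name }} would expose for this push:
GITHUB_REF_NAME="${GITHUB_REF#refs/heads/}"

# So these two conditions match the same pushes:
#   github.ref == 'refs/heads/dev'   <=>   github.ref_name == 'dev'
echo "$GITHUB_REF_NAME"
```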
133-136: Update or remove placeholder comments. The replacement instructions at the end should either be removed (if replacements are already done) or moved to a separate documentation file to avoid confusion.
.github/workflows/update-manifest.yaml (1)
36-39: Consider using an action for yq installation. Installing yq manually via wget works, but using a GitHub Action provides better caching and version control.

```diff
- - name: Install yq (YAML processor)
-   run: |
-     sudo wget -qO /usr/bin/yq https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64
-     sudo chmod +x /usr/bin/yq
+ - name: Install yq (YAML processor)
+   uses: mikefarah/yq@v4
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- `.github/workflows/build.yaml` (2 hunks)
- `.github/workflows/deploy.yaml.old` (1 hunk)
- `.github/workflows/update-manifest.yaml` (1 hunk)
🔇 Additional comments (7)
.github/workflows/build.yaml (3)
89-94: LGTM! Branch extraction is clear and well-documented. The manual branch extraction with output and logging provides good visibility for debugging.
100-107: No changes needed: the SHA placeholder format is correct. The `{{sha}}` placeholder in docker/metadata-action v5 produces a short SHA by default (7 characters), which matches the format expected by `update-manifest.yaml`.
119: Dockerfile location confirmed. The root Dockerfile exists at the specified context (`.`) and is correctly configured for building the auth-service. The file references auth-service paths explicitly, and the workflow configuration is consistent with this setup.

.github/workflows/update-manifest.yaml (4)
19-26: LGTM! Branch and SHA extraction is correct. The extraction of branch name and short SHA from the workflow_run event correctly matches the format used in build.yaml.
47: Image tag format must match build.yaml output. This line constructs the image tag as `branch-sha` (e.g., `dev-abc1234`). Ensure this format exactly matches what build.yaml generates, particularly the SHA length and separator. Based on the earlier concern about docker/metadata-action's `{{sha}}` placeholder, verify that both workflows produce consistent tag formats. If build.yaml uses a long SHA or a different format, this update will fail to match the actual image.
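A quick way to reason about the contract: both sides must produce byte-identical tags for the same commit. A sketch with illustrative values; the variable names are not taken from the actual workflow files:

```shell
# build.yaml side: branch name plus a 7-character short SHA.
# update-manifest.yaml side: cut -c1-7 on workflow_run.head_sha.
FULL_SHA="abc1234def5678900000000000000000000000aa"
BRANCH="dev"

BUILD_TAG="${BRANCH}-$(printf '%.7s' "$FULL_SHA")"               # short SHA, 7 chars
MANIFEST_TAG="${BRANCH}-$(printf '%s' "$FULL_SHA" | cut -c1-7)"  # same 7 chars via cut

echo "$BUILD_TAG $MANIFEST_TAG"
```

If the build side ever switches to a long SHA or a different separator, the two values diverge and the manifest ends up pointing at a nonexistent image.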
68-71: LGTM! Safe handling of the no-changes scenario. The check for cached changes before committing prevents unnecessary commits and handles the edge case gracefully.
32: Document `REPO_ACCESS_TOKEN` secret setup and verify it exists in GitHub Actions settings. Verification confirms the workflow uses `${{ secrets.REPO_ACCESS_TOKEN }}` at line 32 to check out the k8s-config repository with write access, but the secret is not documented in the repository. No setup instructions exist for developers or maintainers. Manually verify this secret is configured at the GitHub organization or repository level in Actions settings, and add documentation explaining its setup and required permissions.
```yaml
# Authentication/.github/workflows/deploy.yml

name: Deploy Auth Service to Kubernetes

on:
  workflow_run:
    # This MUST match the 'name:' of your build.yml file
    workflows: ["Build and Package Service"]
    types:
      - completed
    branches:
      - 'main'
      - 'devOps'

jobs:
  deploy:
    name: Deploy Auth Service to Kubernetes
    # We only deploy if the build job was successful
    if: ${{ github.event.workflow_run.conclusion == 'success' }}
    runs-on: ubuntu-latest

    steps:
      # We only need the SHA of the new image
      - name: Get Commit SHA
        id: get_sha
        run: |
          echo "sha=$(echo ${{ github.event.workflow_run.head_sha }} | cut -c1-7)" >> $GITHUB_OUTPUT

      # 1. Checkout your new 'k8s-config' repository
      - name: Checkout K8s Config Repo
        uses: actions/checkout@v4
        with:
          # This points to your new repo
          repository: 'TechTorque-2025/k8s-config'
          # This uses the org-level secret you created
          token: ${{ secrets.REPO_ACCESS_TOKEN }}
          # We'll put the code in a directory named 'config-repo'
          path: 'config-repo'
          # --- NEW LINE ---
          # Explicitly checkout the 'main' branch
          ref: 'main'

      - name: Install kubectl
        uses: azure/setup-kubectl@v3

      - name: Install yq
        run: |
          sudo wget https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64 -O /usr/bin/yq
          sudo chmod +x /usr/bin/yq

      - name: Set Kubernetes context
        uses: azure/k8s-set-context@v4
        with:
          kubeconfig: ${{ secrets.KUBE_CONFIG_DATA }}  # This uses your Org-level secret

      # 2. Update the image tag for the *authentication* service
      - name: Update image tag in YAML
        run: |
          yq -i '(select(.kind == "Deployment") | .spec.template.spec.containers[0].image) = "ghcr.io/techtorque-2025/authentication:${{ steps.get_sha.outputs.sha }}"' config-repo/k8s/services/auth-deployment.yaml

      # --- NEW DEBUGGING STEP ---
      - name: Display file contents before apply
        run: |
          echo "--- Displaying k8s/services/auth-deployment.yaml ---"
          cat config-repo/k8s/services/auth-deployment.yaml
          echo "------------------------------------------------------"

      # 3. Deploy the updated file
      - name: Deploy to Kubernetes
        run: |
          kubectl apply -f config-repo/k8s/services/auth-deployment.yaml
          kubectl rollout status deployment/auth-deployment
```
🛠️ Refactor suggestion | 🟠 Major
Clarify the status of this `.old` file.
The `.old` extension suggests this workflow is deprecated, likely replaced by `update-manifest.yaml`, which follows GitOps principles (commit to k8s-config, let ArgoCD sync) rather than direct `kubectl apply`.
If this workflow is no longer needed:
- Delete the file entirely to avoid confusion
- Document the migration in the PR description or README

If it's kept for reference or rollback purposes:
- Disable the workflow by removing the `on:` triggers
- Add a prominent comment at the top explaining its status
- Consider moving it to a `docs/` or `.archive/` directory
```yaml
# DEPRECATED: This workflow has been replaced by update-manifest.yaml
# Kept for reference only - DO NOT USE
# See: update-manifest.yaml for the current GitOps-based deployment approach
```

🤖 Prompt for AI Agents

```text
In .github/workflows/deploy.yaml.old around lines 1-72 the file appears to be an
outdated deployment workflow (note the .old suffix) causing potential confusion
with the current GitOps flow; either remove the file if it's fully deprecated
and add a short note in the PR description/README about the migration to
update-manifest.yaml, or if you must keep it for reference disable all triggers
(remove the on: block), add a prominent top-of-file comment stating "DEPRECATED:
replaced by update-manifest.yaml — kept for reference only; do not use", and
optionally move the file into a docs/ or .archive/ directory to avoid accidental
execution.
```
```yaml
      - completed
    branches:
      - 'main'
      - 'devOps'
```
Critical: `devOps` branch removed from `build.yaml`.
This workflow triggers on the `devOps` branch, but `build.yaml` (lines 9-10) no longer builds images for this branch. This creates a mismatch where deployments may trigger without corresponding images.
Either update `build.yaml` to include `devOps` or remove this branch reference:
```diff
  branches:
    - 'main'
-   - 'devOps'
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Suggested change:

```diff
-   - 'devOps'
+ branches:
+   - 'main'
```
🤖 Prompt for AI Agents
```text
In .github/workflows/deploy.yaml.old around line 13 (the '- 'devOps'' entry)
there is a branch trigger mismatch: build.yaml (lines 9-10) no longer builds
images for devOps, so deployments can fire without a corresponding image; fix by
either adding devOps back into build.yaml's image build matrix (so builds
produce images for devOps) or remove the 'devOps' branch entry from deploy.yaml
(and any other workflow files) so triggers and build.yaml remain consistent;
ensure after change you run a quick CI dry-run or validate workflow files to
confirm triggers align.
```
```yaml
# 2. Update the image tag for the *authentication* service
- name: Update image tag in YAML
  run: |
    yq -i '(select(.kind == "Deployment") | .spec.template.spec.containers[0].image) = "ghcr.io/techtorque-2025/authentication:${{ steps.get_sha.outputs.sha }}"' config-repo/k8s/services/auth-deployment.yaml
```
Critical: Image tag format incompatible with new tagging scheme.
This workflow uses SHA-only tags (`authentication:abc1234`), but `build.yaml` now generates branch-prefixed tags (`authentication:main-abc1234` or `authentication:dev-abc1234`). This mismatch will cause image pull failures.
Update the image reference to match the new format from `build.yaml`:
```diff
  - name: Update image tag in YAML
    run: |
-     yq -i '(select(.kind == "Deployment") | .spec.template.spec.containers[0].image) = "ghcr.io/techtorque-2025/authentication:${{ steps.get_sha.outputs.sha }}"' config-repo/k8s/services/auth-deployment.yaml
+     BRANCH="${{ github.event.workflow_run.head_branch }}"
+     yq -i '(select(.kind == "Deployment") | .spec.template.spec.containers[0].image) = "ghcr.io/techtorque-2025/authentication:'"${BRANCH}"'-${{ steps.get_sha.outputs.sha }}"' config-repo/k8s/services/auth-deployment.yaml
```

However, given that `update-manifest.yaml` appears to replace this workflow with a GitOps approach, consider removing this file entirely.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Suggested change:

```diff
- yq -i '(select(.kind == "Deployment") | .spec.template.spec.containers[0].image) = "ghcr.io/techtorque-2025/authentication:${{ steps.get_sha.outputs.sha }}"' config-repo/k8s/services/auth-deployment.yaml
+ - name: Update image tag in YAML
+   run: |
+     BRANCH="${{ github.event.workflow_run.head_branch }}"
+     yq -i '(select(.kind == "Deployment") | .spec.template.spec.containers[0].image) = "ghcr.io/techtorque-2025/authentication:'"${BRANCH}"'-${{ steps.get_sha.outputs.sha }}"' config-repo/k8s/services/auth-deployment.yaml
```
```yaml
SERVICE_NAME: "authentication"           # e.g., "timelogging_service", "frontend_web", "authentication"
DEPLOYMENT_FILE: "auth-deployment.yaml"  # e.g., "timelogging-deployment.yaml", "frontend-deployment.yaml"
```
🛠️ Refactor suggestion | 🟠 Major
Eliminate hardcoded service-specific values.
The `SERVICE_NAME` (`authentication`) and `DEPLOYMENT_FILE` (`auth-deployment.yaml`) are hardcoded in multiple locations despite comments suggesting they should be parameterized. This makes the "template" non-reusable across different services.
Consider using repository variables or extracting from repository name:
```diff
  - name: Update image tag in deployment manifest
    env:
-     SERVICE_NAME: "authentication" # e.g., "timelogging_service", "frontend_web", "authentication"
-     DEPLOYMENT_FILE: "auth-deployment.yaml" # e.g., "timelogging-deployment.yaml", "frontend-deployment.yaml"
+     SERVICE_NAME: ${{ vars.SERVICE_NAME }}
+     DEPLOYMENT_FILE: ${{ vars.DEPLOYMENT_FILE }}
```
+ DEPLOYMENT_FILE: ${{ vars.DEPLOYMENT_FILE }}And update the summary step:
```diff
  echo "- **Branch**: ${{ steps.info.outputs.branch }}" >> $GITHUB_STEP_SUMMARY
  echo "- **Image Tag**: ${{ steps.info.outputs.branch }}-${{ steps.info.outputs.sha }}" >> $GITHUB_STEP_SUMMARY
- echo "- **Manifest Updated**: k8s/services/auth-deployment.yaml" >> $GITHUB_STEP_SUMMARY
+ echo "- **Manifest Updated**: k8s/services/${{ env.DEPLOYMENT_FILE }}" >> $GITHUB_STEP_SUMMARY
  echo "- **Next Step**: ArgoCD will sync this change to the cluster" >> $GITHUB_STEP_SUMMARY
```

Also applies to: lines 60 and 87.
🤖 Prompt for AI Agents
```text
.github/workflows/update-manifest.yaml lines 43-44 (and also adjust occurrences
at lines 60 and 87): the SERVICE_NAME and DEPLOYMENT_FILE are hardcoded; change
them to use workflow inputs or repository/organization variables (e.g., inputs
defined in the workflow or GitHub repository secrets/variables or derive from
github.repository) so the file becomes a reusable template; replace hardcoded
values with references to the chosen inputs/vars, ensure parsing logic derives
SERVICE_NAME from the repo name if needed, and update the summary step to
reference the new variables instead of literal
"authentication"/"auth-deployment.yaml".
```