
deploy oss demo app#71

Merged
qasmi merged 2 commits into main from demo
Dec 8, 2025

Conversation


@qasmi qasmi commented Dec 8, 2025

No description provided.


@my-helper-agent my-helper-agent left a comment


1️⃣ Real-Time Diff Summary

| Kind | Namespace | Name | Action |
| --- | --- | --- | --- |
| Namespace | demo-oss-japan | demo-oss-japan | CREATE |
| Service | demo-oss-japan | game-2048-dao-2048 | CREATE |
| Deployment | demo-oss-japan | game-2048-dao-2048 | CREATE |
| Application | argocd | game-2048 | CREATE |
| AppProject | argocd | demo-oss-japan | CREATE |

2️⃣ Impact Prediction

  • The Deployment requires pods to be scheduled on nodes with label role=storage and tolerate taint with key=role, value=storage, effect=NoSchedule.
  • No live cluster node carries the label role=storage, so the pods are unschedulable and will remain Pending.
  • Service is of type LoadBalancer; IP provisioning depends on cloud provider capabilities.
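
The scheduling constraint described above corresponds roughly to the following fragment of the Deployment's pod spec (a sketch reconstructed from the report; the rest of the Deployment is elided and field ordering may differ from the chart):

```yaml
# Sketch of the pod spec fragment implied by the report.
# Label key/value and taint fields are taken from the summary above.
spec:
  template:
    spec:
      nodeSelector:
        role: storage            # no live node carries this label
      tolerations:
        - key: role
          operator: Equal
          value: storage
          effect: NoSchedule
```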

3️⃣ Risk Analysis

  • Problem: Deployment containers run as the root user, violating the Pod Security Standards (Restricted) Kyverno policy.
    Impact: Pod creation is blocked by the admission controller enforcing this policy, causing a runtime failure.
    Mitigation: Set runAsNonRoot: true and specify a non-root user in the container securityContext.

  • Problem: Deployment containers lack CPU and memory resource requests and limits, violating best practices Kyverno policy.
    Impact: Potential unstable resource allocation and scheduling issues.
    Mitigation: Define CPU and memory requests and limits for containers in Deployment.
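
Both mitigations above can be applied in one container spec. The sketch below is hypothetical: the container name, image, user ID, and resource figures are illustrative placeholders, not values from the chart:

```yaml
# Hypothetical container spec applying both mitigations.
# Image and resource figures are placeholders.
spec:
  template:
    spec:
      containers:
        - name: game-2048
          image: example/game-2048:latest   # placeholder image
          securityContext:
            runAsNonRoot: true
            runAsUser: 1000                 # any non-root UID
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 200m
              memory: 256Mi
```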


@my-helper-agent my-helper-agent left a comment


1️⃣ Real-Time Diff Summary

| Kind | Namespace | Name | Action |
| --- | --- | --- | --- |
| Namespace | (cluster-wide) | demo-oss-japan | CREATE |
| Deployment | demo-oss-japan | game-2048-dao-2048 | CREATE |
| Service | demo-oss-japan | game-2048-dao-2048 | CREATE |
| AppProject | argocd | demo-oss-japan | CREATE |
| Application | argocd | game-2048 | CREATE |

2️⃣ Impact Prediction

  • The new namespace demo-oss-japan will be created.
  • The game-2048-dao-2048 Deployment will schedule pods only to nodes labeled with role=storage and tolerate a matching NoSchedule taint with key role and value storage.
  • The cluster has 3 nodes: 1 control-plane and 2 workers, none of which have the label role=storage. Hence the pods will be unschedulable and remain Pending.
  • A service of type LoadBalancer will be created for external exposure, but the LoadBalancer IP may take time to be assigned.
  • Kyverno policy violations:
    • Critical: The container is running as root user, which is disallowed by the enforced Pod Security Standards (Restricted).
    • Medium: The container lacks required CPU and memory resource requests and limits, violating best practices policies.
  • These Kyverno violations will fail validation at admission, blocking the Deployment from progressing.

3️⃣ Risk Analysis

  • Problem: Deployment pods specify nodeSelector: role=storage but cluster nodes lack this label.
    Impact: Pods remain unschedulable indefinitely, causing application downtime.
    Mitigation: Add the label role=storage to one or more nodes intended to run these pods or remove/adjust the nodeSelector in Deployment spec.

  • Problem: Pod runs container as root user, violating the cluster's Pod Security Standards (Restricted).
    Impact: Deployment blocked by Kyverno policy enforcement, preventing pod creation.
    Mitigation: Update Deployment pod security context to run as non-root user; e.g., set runAsNonRoot: true or specify a non-root user ID.

  • Problem: CPU and memory resource requests and limits are missing from container spec.
    Impact: Deployment blocked or flagged by Kyverno best practices policy, potentially preventing scheduling or causing resource contention.
    Mitigation: Specify appropriate resources.requests and resources.limits for CPU and memory in pod spec.

  • Problem: Target namespace demo-oss-japan does not exist in the live cluster and will be created.
    Impact: No immediate risk; however, downstream namespace-specific policies or ResourceQuotas should be checked after namespace creation.
    Mitigation: Confirm intended namespace creation and apply any necessary quota or policy configurations post-creation.
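
One way to follow up on the quota point above is to apply a ResourceQuota after the namespace is created. This is a sketch only; the limits are illustrative placeholders, not values from the cluster:

```yaml
# One possible post-creation quota for the new namespace.
# All hard limits here are illustrative placeholders.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: demo-oss-japan-quota   # hypothetical name
  namespace: demo-oss-japan
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 4Gi
    limits.cpu: "4"
    limits.memory: 8Gi
```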

@qasmi qasmi closed this Dec 8, 2025
@qasmi qasmi deleted the demo branch December 8, 2025 05:11
@qasmi qasmi restored the demo branch December 8, 2025 05:13
@qasmi qasmi reopened this Dec 8, 2025

@my-helper-agent my-helper-agent left a comment


1️⃣ Real-Time Diff Summary

| Kind | Namespace | Name | Action |
| --- | --- | --- | --- |
| Namespace | - | demo-oss-japan | CREATE |
| Service | demo-oss-japan | game-2048-dao-2048 | CREATE |
| Deployment | demo-oss-japan | game-2048-dao-2048 | CREATE |
| AppProject | argocd | demo-oss-japan | CREATE |
| Application | argocd | game-2048 | CREATE |

2️⃣ Impact Prediction

  • The namespace demo-oss-japan does not currently exist in the live cluster but will be created by the manifest.
  • The Service of type LoadBalancer will provision an external IP but may require time for the LoadBalancer IP to be ready.
  • Deployment pod will be scheduled on nodes labeled with role=storage.
  • Cluster nodes carry either the control-plane role or no role label at all; none has the label role=storage, so no node matches the nodeSelector role=storage.
  • Hence, the deployment pods will be unschedulable and remain in Pending state due to nodeSelector mismatch.
  • Tolerations specified are valid but irrelevant because no node matches the nodeSelector.
  • Kyverno policy enforcement failed:
    • Privilege escalation is disallowed, but the Deployment's container securityContext does not explicitly set allowPrivilegeEscalation: false, so admission is rejected.
    • CPU and memory resource requests and limits are missing from the containers, violating best practices policy.

3️⃣ Risk Analysis

  • Problem: Deployment pods have nodeSelector role=storage but no nodes in cluster have that label.
    Impact: Pods will remain in Pending state indefinitely due to no matching nodes for scheduling.
    Mitigation: Remove or correct the nodeSelector to match existing node labels or label nodes appropriately.

  • Problem: Deployment lacks explicit allowPrivilegeEscalation setting; Kyverno policy disallows privilege escalation.
    Impact: Pod admission will be denied, blocking pod creation and deployment functionality.
    Mitigation: Add securityContext: allowPrivilegeEscalation: false to container spec to comply with the policy.

  • Problem: Deployment containers do not specify CPU and memory resource requests and limits.
    Impact: Kyverno policy rejects the pod creation, blocking deployment.
    Mitigation: Define explicit CPU and memory resource requests and limits for containers in the deployment.

  • Problem: Namespace demo-oss-japan does not currently exist in live cluster.
    Impact: Resources targeting this namespace cannot be created until the namespace exists.
    Mitigation: The manifest creates the namespace, so ensure namespace creation is prioritized before other resources deploy.
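
The allowPrivilegeEscalation mitigation above is a small addition to the container spec. A minimal sketch (the container name is assumed from the Deployment name; other fields elided):

```yaml
# Minimal securityContext addition the policy asks for.
# Container name assumed; other container fields elided.
containers:
  - name: game-2048-dao-2048
    securityContext:
      allowPrivilegeEscalation: false
```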


@my-helper-agent my-helper-agent left a comment


1️⃣ Real-Time Diff Summary

| Kind | Namespace | Name | Action |
| --- | --- | --- | --- |
| Namespace | - | demo-oss-japan | CREATE |
| Service | demo-oss-japan | game-2048-dao-2048 | CREATE |
| Deployment | demo-oss-japan | game-2048-dao-2048 | CREATE |

2️⃣ Impact Prediction

  • The new namespace demo-oss-japan will be created.
  • The Helm chart dao-2048 version 1.2.1 will deploy a Deployment and a LoadBalancer Service named game-2048-dao-2048 in the demo-oss-japan namespace.
  • The Deployment's pod has resource requests and limits of 100m CPU and 128Mi memory.
  • The Service is of type LoadBalancer, so external IP allocation may take some time after deployment.
  • Security context disables privilege escalation for the container, enhancing security.
  • Since the namespace does not yet exist in the live cluster, the workload has no existing resources and therefore no environmental impact yet.
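
Based on the figures above, the rendered container spec presumably resembles the following (a reconstruction from the report, not the actual chart output; the image reference is a placeholder):

```yaml
# Reconstruction of the rendered container spec from the
# figures above; the image reference is a placeholder.
containers:
  - name: game-2048-dao-2048
    image: example/dao-2048:1.2.1   # placeholder image reference
    securityContext:
      allowPrivilegeEscalation: false
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 100m
        memory: 128Mi
```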

3️⃣ Risk Analysis

  • Problem: Target namespace demo-oss-japan does not currently exist in the live cluster.
    Impact: Namespaced resources cannot be created until the namespace exists, blocking the deployment.
    Mitigation: Create namespace demo-oss-japan in the cluster before or as part of the deployment.

  • No Kyverno policy violations detected on the rendered manifest resources.

  • No resource quota found for demo-oss-japan, no quota violations expected.

  • No scheduling constraints detected; cluster nodes available and ready.

  • The manifests reference no ConfigMaps, Secrets, or PVCs, so no missing dependencies were detected.


Submitted GitOps validation report for PR 71.

@qasmi qasmi merged commit 0406a36 into main Dec 8, 2025
1 check passed
@qasmi qasmi deleted the demo branch December 8, 2025 05:17