
oss demo app #65

Closed
qasmi wants to merge 1 commit into main from demo

Conversation

Owner

@qasmi qasmi commented Dec 7, 2025

No description provided.

Collaborator

@my-helper-agent my-helper-agent left a comment

1️⃣ Real-Time Diff Summary

| Kind | Namespace | Name | Action |
| --- | --- | --- | --- |
| Namespace | demo-oss-japan | demo-oss-japan | CREATE |
| Service | demo-oss-japan | game-2048-dao-2048 | CREATE |
| Deployment | demo-oss-japan | game-2048-dao-2048 | CREATE |
| AppProject | argocd | demo-oss-japan | CREATE |
| Application | argocd | game-2048 | CREATE |

2️⃣ Impact Prediction

  • Namespace demo-oss-japan will be created.
  • Deployment game-2048-dao-2048 requires scheduling on nodes with the label role=storage and tolerates the NoSchedule taint for role=storage.
  • No nodes in the cluster have the label role=storage, so scheduling will fail.
  • The Service type is LoadBalancer; ensure the cloud provider supports it, or external IP allocation may be delayed.

3️⃣ Risk Analysis

  • Problem: Deployment game-2048-dao-2048 runs its container as root (no non-root user specified), violating the require-run-as-nonroot Kyverno policy.
    Impact: Pod creation and scheduling will fail due to security policy rejection.
    Mitigation: Specify a non-root runAsUser or configure the container security context accordingly.
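
As a sketch of that mitigation, a pod-level securityContext along these lines would satisfy the policy; the UID is an illustrative assumption and must match a user that actually exists in the container image:

```yaml
# Illustrative pod-level securityContext; runAsUser 1000 is an assumption,
# pick a UID the container image actually supports.
securityContext:
  runAsNonRoot: true
  runAsUser: 1000
```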

  • Problem: Deployment game-2048-dao-2048 is missing CPU and memory resource requests and limits, violating the require-requests-limits Kyverno policy.
    Impact: The pod may be admitted but risks resource contention, and may be blocked in clusters enforcing this policy.
    Mitigation: Add resource requests and limits in container spec to comply with policy.
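
A hedged example of that mitigation; the request and limit values below are placeholders to be tuned against observed usage, not recommendations:

```yaml
# Placeholder CPU/memory values; size them from real load measurements.
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 250m
    memory: 256Mi
```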

  • Problem: Target namespace demo-oss-japan does not exist in the live cluster.
    Impact: Deployment and Service creation may be blocked until the namespace exists.
    Mitigation: Create the namespace prior to deploying workloads, or ensure ArgoCD creates it before sync.
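
If ArgoCD should create the namespace itself, the Application's syncPolicy supports the CreateNamespace sync option:

```yaml
# Added under the game-2048 Application spec; ArgoCD then creates
# demo-oss-japan before syncing the workloads into it.
syncPolicy:
  syncOptions:
    - CreateNamespace=true
```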

  • Problem: Scheduling will fail because no nodes carry the role=storage label required by the nodeSelector and tolerations.
    Impact: Pods will remain in Pending state indefinitely.
    Mitigation: Add nodes with label role=storage or adjust nodeSelector and tolerations accordingly.
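
For reference, the pod-spec fragment described in this review looks like the following; pods carrying it stay Pending until some node has the role=storage label:

```yaml
# Matches the constraints described above; shown for reference only.
nodeSelector:
  role: storage
tolerations:
  - key: role
    operator: Equal
    value: storage
    effect: NoSchedule
```

Labeling an existing node (for example `kubectl label node <node-name> role=storage`) is the quickest way to unblock scheduling.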

Collaborator

@my-helper-agent my-helper-agent left a comment

1️⃣ Real-Time Diff Summary

| Kind | Namespace | Name | Action |
| --- | --- | --- | --- |
| Namespace |  | demo-oss-japan | CREATE |
| Service | demo-oss-japan | game-2048-dao-2048 | CREATE |
| Deployment | demo-oss-japan | game-2048-dao-2048 | CREATE |
| Application | argocd | game-2048 | CREATE |
| AppProject | argocd | demo-oss-japan | CREATE |

2️⃣ Impact Prediction

  • The namespace demo-oss-japan does not currently exist in the live cluster and will be created.
  • The Deployment pod template requires scheduling on nodes with the label role=storage. No nodes in the cluster have this label, so the pod will remain Pending because the nodeSelector cannot be satisfied.
  • The Deployment tolerates the NoSchedule taint with key role=storage, but since no node satisfies the nodeSelector role=storage, the pod cannot be scheduled.
  • Service of type LoadBalancer will be created; depending on cluster provider, external IP allocation may take some time but no blocking issues are expected.
  • The Deployment container runs as root user (no runAsNonRoot specified).
  • No CPU or memory resource requests and limits are specified on the container.
  • Kyverno policies flag critical errors for:
    • Container running as root disallowed by Pod Security Standards (Restricted).
    • Missing CPU and memory resource requests and limits.
  • These policy violations will block admission in clusters enforcing these policies.

3️⃣ Risk Analysis

  • Problem: Target namespace demo-oss-japan is missing in the live cluster and not confirmed created before Application deployment.
    Impact: Application creation and resource deployment will fail due to non-existent target namespace.
    Mitigation: Ensure namespace demo-oss-japan is created and active before Application synchronization.

  • Problem: Deployment pod nodeSelector requires role: storage label which no current node has.
    Impact: The pod will remain unschedulable and Pending indefinitely.
    Mitigation: Add label role=storage to at least one node in the cluster or adjust nodeSelector accordingly.

  • Problem: Deployment pod securityContext lacks runAsNonRoot: true and container runs as root user.
    Impact: Admission controller or Kyverno Pod Security policy will reject pod creation, causing deployment failure.
    Mitigation: Add securityContext.runAsNonRoot: true to pod/containers to comply with Pod Security Standards.

  • Problem: Deployment container lacks CPU and memory resource requests and limits.
    Impact: Kyverno policies enforcing resource constraints will block the deployment; resource allocation will otherwise be unpredictable.
    Mitigation: Specify CPU and memory resource requests and limits in the Deployment container spec.

Collaborator

@my-helper-agent my-helper-agent left a comment

1️⃣ Real-Time Diff Summary

| Kind | Namespace | Name | Action |
| --- | --- | --- | --- |
| Namespace |  | demo-oss-japan | CREATE |
| Application | argocd | game-2048 | CREATE |

2️⃣ Impact Prediction

  • The namespace "demo-oss-japan" will be created.
  • The ArgoCD Application "game-2048" will deploy the dao-2048 Helm chart version 1.2.1 into "demo-oss-japan".
  • The deployment will schedule pods on nodes with label role=storage and tolerate the NoSchedule taint for the same key.
  • The Service will be of type LoadBalancer and expose port 80.
  • Deployment pods will NOT run as a non-root user, violating pod security policies.
  • Deployment containers lack CPU and memory resource requests and limits, violating best practices and EKS policies.
  • The "demo-oss-japan" namespace does not currently exist in the live cluster, so resources will be newly created.
  • No ResourceQuota or other limiting resources exist in the "demo-oss-japan" namespace.
  • No existing ServiceAccounts, ConfigMaps, Secrets, PVCs, or NetworkPolicies found in "demo-oss-japan", so no dependency conflicts.
  • The target nodes are available and match the nodeSelector and tolerations for scheduling.

3️⃣ Risk Analysis

  • Problem: Deployment pods run as root user contrary to cluster Pod Security Standards (Restricted).
    Impact: Runtime failure due to Kyverno policy enforcement denying pod creation.
    Mitigation: Configure securityContext.runAsNonRoot=true for the pod and containers in the Helm chart values or override parameters.
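
A hedged sketch of a Helm values override for that mitigation; the exact key names depend on the dao-2048 chart's values schema, so treat podSecurityContext here as an assumption:

```yaml
# Hypothetical values.yaml override; verify the chart actually exposes
# a podSecurityContext (or equivalent) value before relying on this.
podSecurityContext:
  runAsNonRoot: true
  runAsUser: 1000
```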

  • Problem: Missing CPU and memory resource requests and limits on containers violate Kyverno best practices.
    Impact: Potential resource contention and degraded cluster reliability; Kyverno policy may block or warn on deployment.
    Mitigation: Define CPU and memory requests and limits explicitly in Helm values to comply with cluster policies.

Collaborator

@my-helper-agent my-helper-agent left a comment

1️⃣ Real-Time Diff Summary

| Kind | Namespace | Name | Action |
| --- | --- | --- | --- |
| Namespace |  | demo-oss-japan | CREATE |
| Service | demo-oss-japan | game-2048-dao-2048 | CREATE |
| Deployment | demo-oss-japan | game-2048-dao-2048 | CREATE |
| Application | argocd | game-2048 | CREATE |
| AppProject | argocd | demo-oss-japan | CREATE |

2️⃣ Impact Prediction

  • The namespace "demo-oss-japan" does not currently exist in the cluster and will be created.
  • The deployment "game-2048-dao-2048" requires scheduling to nodes with label role=storage and tolerates taints with key role, value storage, effect NoSchedule.
  • Current live cluster nodes labels:
    • kind-control-plane node has no label role=storage and has a NoSchedule taint on node-role.kubernetes.io/control-plane (no matching toleration).
    • kind-worker and kind-worker2 nodes have no label role=storage.
  • Hence, no nodes in the cluster currently match the nodeSelector role=storage.
  • Deployment pods will remain in Pending state due to unschedulable constraints until nodes with label role=storage are available.
  • The service "game-2048-dao-2048" is exposed as LoadBalancer type; availability depends on external LoadBalancer integration.
  • Kyverno policy violations detected on deployment:
    • Critical: Container runs as root (securityContext.runAsNonRoot not set).
    • Medium: CPU and memory resource requests and limits are not specified.

3️⃣ Risk Analysis

  • Problem: Deployment requires scheduling on nodes labeled role=storage, but no such nodes exist.
    Impact: Pods will be stuck in Pending forever due to lack of matching nodes.
    Mitigation: Add label role=storage to one or more cluster nodes or modify deployment to use existing node labels.

  • Problem: Deployment securityContext does not set runAsNonRoot.
    Impact: Kyverno will raise a critical policy rejection, blocking pod creation and causing a runtime failure.
    Mitigation: Configure deployment pod's securityContext with runAsNonRoot: true to comply with security policies.

  • Problem: Deployment containers lack resource requests and limits.
    Impact: Medium priority Kyverno violation; may cause resource management issues and policy enforcement failures.
    Mitigation: Define appropriate CPU and memory resource requests and limits in the deployment spec.

Summary

This PR introduces new namespace and deployment resources that will fail to run due to node scheduling constraints and critical Kyverno policy violations. These issues must be addressed before merging to avoid deployment breakage and runtime failures.

@qasmi qasmi closed this Dec 7, 2025
@qasmi qasmi deleted the demo branch December 7, 2025 08:32