[Multi_K8s-Plugin] Baseline Clean #6607
Conversation
Signed-off-by: Mohammed Firdous <124298708+mohammedfirdouss@users.noreply.github.com>
@Warashi I am fixing the merge conflicts on baseline clean too, so I'll wait for your review on this as well. Thanks :)
Warashi left a comment:
Please check the kubectl version resolution handling.
toolRegistry := toolregistry.NewRegistry(input.Client.ToolRegistry())
kubectlPath, err := toolRegistry.Kubectl(ctx, cmp.Or(appCfg.Input.KubectlVersion, dt.Config.KubectlVersion))
I'm sorry if I missed the same thing in other PRs, but I think we should use the KubectlVersion in multiTarget config if it's defined.
WDYT?
Hm. You are right, the baselineClean wasn't respecting the multiTarget.KubectlVersion. I will apply the same priority order as baselineRollout: multiTarget.KubectlVersion > spec.KubectlVersion > deployTarget.KubectlVersion.
Force-pushed from 0a6d1f6 to 7200bb2
Merge upstream/master (baseline rollout PR pipe-cd#6606) into feat/k8s-multi-baseline-clean. Combined baseline rollout and baseline clean functions in baseline.go and baseline_test.go. Both stages (K8S_BASELINE_ROLLOUT and K8S_BASELINE_CLEAN) are now present in pipeline.go, plugin.go, and config/application.go. Signed-off-by: Mohammed Firdous <124298708+mohammedfirdouss@users.noreply.github.com>
Force-pushed from 7200bb2 to c9f5d45
Pass multiTarget through to baselineClean and apply the same kubectl version priority as baselineRollout: multiTarget.KubectlVersion > spec.KubectlVersion > deployTarget.KubectlVersion. Signed-off-by: Mohammed Firdous <124298708+mohammedfirdouss@users.noreply.github.com>
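The three-level priority described in this commit maps naturally onto Go's cmp.Or (Go 1.22+), which returns its first non-zero argument. A minimal sketch, assuming plain string parameters as stand-ins for the real config fields (multiTarget.KubectlVersion, spec.KubectlVersion, deployTarget.KubectlVersion) rather than the plugin's actual types:

```go
package main

import (
	"cmp"
	"fmt"
)

// resolveKubectlVersion returns the first non-empty version string,
// mirroring the priority: multiTarget > spec > deployTarget.
// The string parameters are a simplification of the real config fields.
func resolveKubectlVersion(multiTarget, spec, deployTarget string) string {
	// cmp.Or returns the first argument that is not the zero value ("").
	return cmp.Or(multiTarget, spec, deployTarget)
}

func main() {
	fmt.Println(resolveKubectlVersion("", "1.30.0", "1.28.0"))       // prints "1.30.0": spec wins when multiTarget is unset
	fmt.Println(resolveKubectlVersion("1.31.0", "1.30.0", "1.28.0")) // prints "1.31.0": multiTarget wins when set
}
```

This is also why extending the original two-argument cmp.Or call to three arguments is enough to add the multiTarget level without extra branching.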
type targetConfig struct {
	deployTarget *sdk.DeployTarget[kubeconfig.KubernetesDeployTargetConfig]
	multiTarget  *kubeconfig.KubernetesMultiTarget
@Warashi Please check from here, if I am on track.
toolRegistry := toolregistry.NewRegistry(input.Client.ToolRegistry())
// Resolve kubectl version: multiTarget > spec > deployTarget
Codecov Report ✅ All modified and coverable lines are covered by tests.

@@ Coverage Diff @@
##           master    #6607      +/-   ##
==========================================
+ Coverage   29.21%   33.04%   +3.82%
==========================================
  Files         582        4     -578
  Lines       62028      115   -61913
==========================================
- Hits        18121       38   -18083
+ Misses      42515       76   -42439
+ Partials     1392        1    -1391
What this PR does: Adds the K8S_BASELINE_CLEAN stage to the kubernetes_multicluster plugin.

After a canary analysis window ends, whether the decision was to promote or roll back, the baseline resources created by K8S_BASELINE_ROLLOUT need to be removed. This stage does exactly that: it finds and deletes all resources labeled pipecd.dev/variant=baseline for the application, across all target clusters in parallel. Without this stage, baseline pods would run indefinitely after the pipeline completes, wasting cluster resources and cluttering kubectl get deployments.
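The per-cluster, in-parallel cleanup can be sketched as a WaitGroup fan-out. This is illustrative only: the runKubectl callback and cluster names are hypothetical stand-ins for the plugin's real kubectl execution; only the pipecd.dev/variant=baseline selector comes from the description above.

```go
package main

import (
	"fmt"
	"sync"
)

// cleanBaseline runs one delete per target cluster in parallel and collects
// per-cluster results. runKubectl is a hypothetical callback standing in for
// the plugin's real kubectl invocation.
func cleanBaseline(clusters []string, runKubectl func(cluster, args string) error) map[string]error {
	var (
		mu      sync.Mutex
		wg      sync.WaitGroup
		results = make(map[string]error, len(clusters))
	)
	for _, c := range clusters {
		wg.Add(1)
		go func(cluster string) {
			defer wg.Done()
			// Delete only resources carrying the baseline variant label.
			err := runKubectl(cluster, "delete all -l pipecd.dev/variant=baseline")
			mu.Lock()
			results[cluster] = err
			mu.Unlock()
		}(c)
	}
	wg.Wait()
	return results
}

func main() {
	results := cleanBaseline([]string{"cluster-a", "cluster-b"}, func(cluster, args string) error {
		fmt.Printf("[%s] kubectl %s\n", cluster, args)
		return nil
	})
	fmt.Println(len(results), "clusters cleaned")
}
```

Collecting one error per cluster (rather than failing fast) lets the stage report which clusters still hold baseline resources after a partial failure.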
Why we need it: K8S_BASELINE_ROLLOUT creates temporary resources (a simple-baseline deployment and, optionally, a simple-baseline service) so you can compare the current version against the canary side by side. Once the analysis window is over, those resources are useless. This stage removes them cleanly and in the correct order (Services before Workloads) to avoid routing traffic to terminating pods.

Which issue(s) this PR fixes: #6446
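The "Services before Workloads" rule mentioned above can be sketched as a stable sort over resource kinds. The rank table is an assumption for illustration (only that Service sorts ahead of workload kinds); the plugin's actual ordering logic may differ.

```go
package main

import (
	"fmt"
	"sort"
)

// sortForDeletion orders resources so Services are deleted before workloads,
// which stops routing traffic to pods that are about to terminate.
// The rank function is illustrative, not the plugin's exact ordering table.
func sortForDeletion(kinds []string) []string {
	rank := func(kind string) int {
		if kind == "Service" {
			return 0 // delete first: stop sending traffic
		}
		return 1 // workloads (Deployment, etc.) are deleted afterwards
	}
	out := append([]string(nil), kinds...)
	sort.SliceStable(out, func(i, j int) bool { return rank(out[i]) < rank(out[j]) })
	return out
}

func main() {
	fmt.Println(sortForDeletion([]string{"Deployment", "Service"})) // prints "[Service Deployment]"
}
```

A stable sort keeps the original relative order within each rank, so same-rank kinds are deleted in the order they were discovered.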
Does this PR introduce a user-facing change?:
How are users affected by this change: Users can now add K8S_BASELINE_CLEAN to their pipeline config to automatically remove baseline resources after canary analysis completes. Without this stage they would need to manually delete the baseline deployment and service.

Is this breaking change: No.
How to migrate (if breaking change): N/A