Conversation

@gangwgr
Contributor

@gangwgr gangwgr commented Feb 2, 2026

Auth network policy e2e cases

@openshift-ci openshift-ci bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Feb 2, 2026
@openshift-ci
Contributor

openshift-ci bot commented Feb 2, 2026

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@coderabbitai

coderabbitai bot commented Feb 2, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Walkthrough

Adds two new Ginkgo-based end-to-end test suites that validate, reconcile, and enforce Kubernetes NetworkPolicy behavior for the authentication operator and OAuth API server; promotes github.com/onsi/gomega to a direct module dependency; and registers an additional non-serial test suite in the test runner.
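
For orientation, here is a minimal sketch of the general shape of such a Ginkgo-based NetworkPolicy e2e check. The client bootstrapping, spec names, and assertions below are illustrative only and are not taken from the PR, which uses the repository's own e2e test-library helpers; only the namespace and policy name mirror values discussed later in this thread.

package e2e

import (
	"context"

	g "github.com/onsi/ginkgo/v2"
	o "github.com/onsi/gomega"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Illustrative sketch: fetch one of the NetworkPolicies discussed in this PR
// and assert that it exists with a non-empty set of policy types.
var _ = g.Describe("[Operator] authentication NetworkPolicies", func() {
	g.It("should keep the oauth-server NetworkPolicy in place", func(ctx context.Context) {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		o.Expect(err).NotTo(o.HaveOccurred())
		kubeClient, err := kubernetes.NewForConfig(config)
		o.Expect(err).NotTo(o.HaveOccurred())

		np, err := kubeClient.NetworkingV1().
			NetworkPolicies("openshift-authentication").
			Get(ctx, "oauth-server-networkpolicy", metav1.GetOptions{})
		o.Expect(err).NotTo(o.HaveOccurred())
		o.Expect(np.Spec.PolicyTypes).NotTo(o.BeEmpty())
	})
})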

Changes

Cohort / File(s) | Summary

  • E2E NetworkPolicy Validation (test/e2e/network_policy.go): New Ginkgo test file validating authentication-operator NetworkPolicies and reconciliation: verifies default-deny policies, pod selectors, ingress/egress ports and sources, namespace-based rules, mutation/restoration flows, event logging, and pod readiness helpers.
  • E2E NetworkPolicy Enforcement (test/e2e/network_policy_enforcement.go): New Ginkgo test file that provisions test namespaces and server/client pods (agnhost/netexec), builds NetworkPolicy objects (defaultDeny/allowIngress/allowEgress), runs connectivity assertions across ports and cross-namespace scenarios, and includes extensive pod lifecycle and policy-builder helpers.
  • Test Runner Suite Registration (cmd/cluster-authentication-operator-tests-ext/main.go): Adds a second test-suite registration selecting non-serial tests (Parallelism 1) with specific tags, in addition to the existing serial suite.
  • Module Dependency (go.mod): Promotes github.com/onsi/gomega to a direct dependency (moved from indirect to direct in the require block).

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

No actionable comments were generated in the recent review. 🎉


Comment @coderabbitai help to get the list of available commands and usage tips.

@openshift-ci
Contributor

openshift-ci bot commented Feb 2, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: gangwgr
Once this PR has been reviewed and has the lgtm label, please assign liouk for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Details

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@gangwgr gangwgr force-pushed the network-policy-e2e branch 2 times, most recently from d51d10b to 68ac2ee on February 3, 2026 at 11:28
@gangwgr gangwgr marked this pull request as ready for review February 3, 2026 11:29
@openshift-ci openshift-ci bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Feb 3, 2026
@openshift-ci openshift-ci bot requested review from ibihim and liouk February 3, 2026 11:29
@gangwgr gangwgr force-pushed the network-policy-e2e branch from 68ac2ee to 6c9a415 on February 3, 2026 at 11:41

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@test/e2e/network_policy.go`:
- Around line 466-477: In deleteAndWaitForNetworkPolicy, capture the
NetworkPolicy's original UID via
kubeClient.NetworkingV1().NetworkPolicies(namespace).Get(...) before deleting,
then perform the Delete and poll until either Get returns NotFound or the
returned object's UID differs from the original UID to ensure the
deletion/reconciliation was observed; also surface any error returned by Delete
instead of discarding it. Use the unique function name
deleteAndWaitForNetworkPolicy and references to
NetworkingV1().NetworkPolicies(...).Get/Delete when locating where to add the
UID capture and adjust the wait condition.
- Around line 82-100: The test can match pre-existing events because
waitForNetworkPolicyEvent filters only by reason/message; fix by capturing a
"notBefore" timestamp immediately before calling deleteAndWaitForNetworkPolicy
in testNetworkPolicyEvents and modify waitForNetworkPolicyEvent to accept a
metav1.Time (or time.Time) notBefore parameter and ignore events with
EventTime/LastTimestamp <= notBefore, then update both calls in
testNetworkPolicyEvents to pass that notBefore value; ensure the event
comparison uses the proper Kubernetes event timestamp fields (EventTime if set,
otherwise LastTimestamp) when filtering.

Comment on lines 82 to 100
func testNetworkPolicyEvents() {
	kubeConfig := e2e.NewClientConfigForTest(g.GinkgoTB())
	kubeClient, err := kubernetes.NewForConfig(kubeConfig)
	o.Expect(err).NotTo(o.HaveOccurred())
	configClient, err := configclient.NewForConfig(kubeConfig)
	o.Expect(err).NotTo(o.HaveOccurred())

	g.By("Waiting for authentication clusteroperator to be stable")
	err = e2e.WaitForClusterOperatorAvailableNotProgressingNotDegraded(g.GinkgoTB(), configClient.ConfigV1(), "authentication")
	o.Expect(err).NotTo(o.HaveOccurred())

	g.By("Deleting oauth-server NetworkPolicy and verifying event emission")
	deleteAndWaitForNetworkPolicy(kubeClient, "openshift-authentication", "oauth-server-networkpolicy")
	waitForNetworkPolicyEvent(kubeClient, "openshift-authentication-operator", "NetworkPolicyCreated", "openshift-authentication", "oauth-server-networkpolicy")

	g.By("Deleting oauth-apiserver NetworkPolicy and verifying event emission")
	deleteAndWaitForNetworkPolicy(kubeClient, "openshift-oauth-apiserver", "oauth-apiserver-networkpolicy")
	waitForNetworkPolicyEvent(kubeClient, "openshift-authentication-operator", "NetworkPolicyCreated", "openshift-oauth-apiserver", "oauth-apiserver-networkpolicy")
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cd test/e2e && wc -l network_policy.go

Repository: openshift/cluster-authentication-operator

Length of output: 106


🏁 Script executed:

cd test/e2e && cat -n network_policy.go | head -150

Repository: openshift/cluster-authentication-operator

Length of output: 7674


🏁 Script executed:

# Search for waitForNetworkPolicyEvent function definition
rg -n "func waitForNetworkPolicyEvent" test/e2e/

Repository: openshift/cluster-authentication-operator

Length of output: 241


🏁 Script executed:

# Search for how events are being listed/queried
rg -n "ListOptions\|FieldSelector\|creationTimestamp" test/e2e/network_policy.go -A 5 -B 2

Repository: openshift/cluster-authentication-operator

Length of output: 67


🏁 Script executed:

cd test/e2e && sed -n '492,546p' network_policy.go

Repository: openshift/cluster-authentication-operator

Length of output: 1839


🏁 Script executed:

# Verify event.CreationTimestamp field exists in corev1.Event
rg -A 5 "type Event struct" --type go

Repository: openshift/cluster-authentication-operator

Length of output: 4858


🏁 Script executed:

# Look at corev1.Event structure more carefully for CreationTimestamp field
rg -A 30 "^type Event struct" vendor/k8s.io/api/core/v1/types.go | head -40

Repository: openshift/cluster-authentication-operator

Length of output: 1559


🏁 Script executed:

# Check if metav1.ObjectMeta has CreationTimestamp field
rg -A 20 "^type ObjectMeta struct" vendor/k8s.io/apimachinery/pkg/apis/meta/v1/types.go | head -25

Repository: openshift/cluster-authentication-operator

Length of output: 1207


🏁 Script executed:

# Search for CreationTimestamp field in ObjectMeta
rg "CreationTimestamp" vendor/k8s.io/apimachinery/pkg/apis/meta/v1/types.go -A 2 -B 2

Repository: openshift/cluster-authentication-operator

Length of output: 776


Guard against matching pre-existing events.

Listing all events without a "not-before" boundary can match older NetworkPolicyCreated events and yield false positives. The waitForNetworkPolicyEvent function at line 492 filters only by reason and message content, with no timestamp checking. Capture a timestamp before deletion and require events created after it.

Proposed fix
 g.By("Deleting oauth-server NetworkPolicy and verifying event emission")
-deleteAndWaitForNetworkPolicy(kubeClient, "openshift-authentication", "oauth-server-networkpolicy")
-waitForNetworkPolicyEvent(kubeClient, "openshift-authentication-operator", "NetworkPolicyCreated", "openshift-authentication", "oauth-server-networkpolicy")
+start := time.Now()
+deleteAndWaitForNetworkPolicy(kubeClient, "openshift-authentication", "oauth-server-networkpolicy")
+waitForNetworkPolicyEvent(kubeClient, "openshift-authentication-operator", "NetworkPolicyCreated", "openshift-authentication", "oauth-server-networkpolicy", start)

 g.By("Deleting oauth-apiserver NetworkPolicy and verifying event emission")
-deleteAndWaitForNetworkPolicy(kubeClient, "openshift-oauth-apiserver", "oauth-apiserver-networkpolicy")
-waitForNetworkPolicyEvent(kubeClient, "openshift-authentication-operator", "NetworkPolicyCreated", "openshift-oauth-apiserver", "oauth-apiserver-networkpolicy")
+start = time.Now()
+deleteAndWaitForNetworkPolicy(kubeClient, "openshift-oauth-apiserver", "oauth-apiserver-networkpolicy")
+waitForNetworkPolicyEvent(kubeClient, "openshift-authentication-operator", "NetworkPolicyCreated", "openshift-oauth-apiserver", "oauth-apiserver-networkpolicy", start)
-func waitForNetworkPolicyEvent(kubeClient kubernetes.Interface, eventNamespace, reason, policyNamespace, policyName string) {
+func waitForNetworkPolicyEvent(kubeClient kubernetes.Interface, eventNamespace, reason, policyNamespace, policyName string, notBefore time.Time) {
 	g.GinkgoHelper()
 
 	g.GinkgoWriter.Printf("waiting for event %s in %s for networkpolicy %s/%s\n", reason, eventNamespace, policyNamespace, policyName)
@@ -504,7 +504,8 @@ func waitForNetworkPolicyEvent(kubeClient kubernetes.Interface, eventNamespace,
 		for _, event := range events.Items {
 			if event.Reason != reason {
 				continue
 			}
-			if strings.Contains(event.Message, policyName) && strings.Contains(event.Message, policyNamespace) {
+			if strings.Contains(event.Message, policyName) && strings.Contains(event.Message, policyNamespace) &&
+				event.CreationTimestamp.Time.After(notBefore) {
 				return true, nil
 			}
 		}
🤖 Prompt for AI Agents
In `@test/e2e/network_policy.go` around lines 82 - 100, The test can match
pre-existing events because waitForNetworkPolicyEvent filters only by
reason/message; fix by capturing a "notBefore" timestamp immediately before
calling deleteAndWaitForNetworkPolicy in testNetworkPolicyEvents and modify
waitForNetworkPolicyEvent to accept a metav1.Time (or time.Time) notBefore
parameter and ignore events with EventTime/LastTimestamp <= notBefore, then
update both calls in testNetworkPolicyEvents to pass that notBefore value;
ensure the event comparison uses the proper Kubernetes event timestamp fields
(EventTime if set, otherwise LastTimestamp) when filtering.

Comment on lines 466 to 477
func deleteAndWaitForNetworkPolicy(kubeClient kubernetes.Interface, namespace, name string) {
	g.GinkgoHelper()

	g.GinkgoWriter.Printf("deleting networkpolicy %s/%s\n", namespace, name)
	_ = kubeClient.NetworkingV1().NetworkPolicies(namespace).Delete(context.TODO(), name, metav1.DeleteOptions{})
	err := wait.PollImmediate(2*time.Second, 10*time.Minute, func() (bool, error) {
		_, err := kubeClient.NetworkingV1().NetworkPolicies(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		return err == nil, nil
	})
	o.Expect(err).NotTo(o.HaveOccurred())
	g.GinkgoWriter.Printf("networkpolicy %s/%s restored\n", namespace, name)
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, let's verify the file exists and check the code at lines 466-477
wc -l test/e2e/network_policy.go

Repository: openshift/cluster-authentication-operator

Length of output: 115


🏁 Script executed:

# Read the specific lines to confirm they match the snippet
sed -n '466,477p' test/e2e/network_policy.go

Repository: openshift/cluster-authentication-operator

Length of output: 719


🏁 Script executed:

# Look for similar delete/wait patterns in the codebase to see how they handle reconciliation
rg -A 8 "wait\.PollImmediate.*Delete" test/ --type go

Repository: openshift/cluster-authentication-operator

Length of output: 67


🏁 Script executed:

# Search for other instances of this function or similar patterns checking UIDs
rg "\.UID" test/e2e/ --type go -B 2 -A 2

Repository: openshift/cluster-authentication-operator

Length of output: 1023


Wait for actual deletion by tracking UID changes.

The poll exits when Get succeeds, but the resource may not have been deleted yet. If the original policy still exists or is immediately recreated, Get succeeds without confirming reconciliation occurred. Store the original UID before deletion and wait for it to change to ensure the delete was actually observed.

Proposed fix
 g.GinkgoWriter.Printf("deleting networkpolicy %s/%s\n", namespace, name)
-_ = kubeClient.NetworkingV1().NetworkPolicies(namespace).Delete(context.TODO(), name, metav1.DeleteOptions{})
-err := wait.PollImmediate(2*time.Second, 10*time.Minute, func() (bool, error) {
-	_, err := kubeClient.NetworkingV1().NetworkPolicies(namespace).Get(context.TODO(), name, metav1.GetOptions{})
-	return err == nil, nil
-})
+original, err := kubeClient.NetworkingV1().NetworkPolicies(namespace).Get(context.TODO(), name, metav1.GetOptions{})
+o.Expect(err).NotTo(o.HaveOccurred())
+originalUID := original.UID
+
+_ = kubeClient.NetworkingV1().NetworkPolicies(namespace).Delete(context.TODO(), name, metav1.DeleteOptions{})
+err = wait.PollImmediate(2*time.Second, 10*time.Minute, func() (bool, error) {
+	policy, err := kubeClient.NetworkingV1().NetworkPolicies(namespace).Get(context.TODO(), name, metav1.GetOptions{})
+	if err != nil {
+		return false, nil
+	}
+	return policy.UID != originalUID, nil
+})
🤖 Prompt for AI Agents
In `@test/e2e/network_policy.go` around lines 466 - 477, In
deleteAndWaitForNetworkPolicy, capture the NetworkPolicy's original UID via
kubeClient.NetworkingV1().NetworkPolicies(namespace).Get(...) before deleting,
then perform the Delete and poll until either Get returns NotFound or the
returned object's UID differs from the original UID to ensure the
deletion/reconciliation was observed; also surface any error returned by Delete
instead of discarding it. Use the unique function name
deleteAndWaitForNetworkPolicy and references to
NetworkingV1().NetworkPolicies(...).Get/Delete when locating where to add the
UID capture and adjust the wait condition.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@test/e2e/network_policy_enforcement.go`:
- Around line 294-299: The code reads
completed.Status.ContainerStatuses[0].State.Terminated.ExitCode without checking
for a nil Terminated and can panic; update the pod-completion check (the block
that inspects completed.Status.ContainerStatuses in the function that calls
waitForPodCompletion) to verify completed.Status.ContainerStatuses is non-empty
and that completed.Status.ContainerStatuses[0].State.Terminated != nil before
accessing ExitCode, and if Terminated is nil return a descriptive error (e.g.,
"no terminated state recorded for pod <name>") or retry/handle as appropriate so
you never dereference a nil Terminated.
🧹 Nitpick comments (1)
test/e2e/network_policy_enforcement.go (1)

302-328: Consider consolidating wait helpers.

waitForPodReady and waitForPodCompletion are similar to helpers in network_policy.go. Consider extracting these to test/library/waits.go to reduce duplication across test files.
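
A minimal sketch of what such a shared helper could look like; the test/library package placement and the condition-callback signature are assumptions for illustration, not code from this PR:

package library

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForPodCondition polls a pod until an arbitrary condition holds, so that
// per-test helpers such as waitForPodReady and waitForPodCompletion can be
// expressed as thin wrappers around one shared implementation.
func WaitForPodCondition(ctx context.Context, kubeClient kubernetes.Interface, namespace, name string,
	timeout time.Duration, condition func(*corev1.Pod) (bool, error)) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
		pod, err := kubeClient.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // tolerate transient errors and keep polling
		}
		return condition(pod)
	})
}

waitForPodReady would then pass a condition that checks the PodReady condition, and waitForPodCompletion one that checks for a terminated container status.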

Comment on lines +294 to +299
	if len(completed.Status.ContainerStatuses) == 0 {
		return false, fmt.Errorf("no container status recorded for pod %s", name)
	}
	exitCode := completed.Status.ContainerStatuses[0].State.Terminated.ExitCode
	g.GinkgoWriter.Printf("client pod %s/%s exitCode=%d\n", namespace, name, exitCode)
	return exitCode == 0, nil

⚠️ Potential issue | 🟡 Minor

Guard against nil Terminated state.

After waitForPodCompletion, the container's State.Terminated field might still be nil in edge cases (e.g., race with kubelet status reporting). Accessing Terminated.ExitCode without a nil check will panic.

Proposed fix
 	if len(completed.Status.ContainerStatuses) == 0 {
 		return false, fmt.Errorf("no container status recorded for pod %s", name)
 	}
+	if completed.Status.ContainerStatuses[0].State.Terminated == nil {
+		return false, fmt.Errorf("container state not terminated for pod %s", name)
+	}
 	exitCode := completed.Status.ContainerStatuses[0].State.Terminated.ExitCode
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-	if len(completed.Status.ContainerStatuses) == 0 {
-		return false, fmt.Errorf("no container status recorded for pod %s", name)
-	}
-	exitCode := completed.Status.ContainerStatuses[0].State.Terminated.ExitCode
-	g.GinkgoWriter.Printf("client pod %s/%s exitCode=%d\n", namespace, name, exitCode)
-	return exitCode == 0, nil
+	if len(completed.Status.ContainerStatuses) == 0 {
+		return false, fmt.Errorf("no container status recorded for pod %s", name)
+	}
+	if completed.Status.ContainerStatuses[0].State.Terminated == nil {
+		return false, fmt.Errorf("container state not terminated for pod %s", name)
+	}
+	exitCode := completed.Status.ContainerStatuses[0].State.Terminated.ExitCode
+	g.GinkgoWriter.Printf("client pod %s/%s exitCode=%d\n", namespace, name, exitCode)
+	return exitCode == 0, nil
🤖 Prompt for AI Agents
In `@test/e2e/network_policy_enforcement.go` around lines 294 - 299, The code
reads completed.Status.ContainerStatuses[0].State.Terminated.ExitCode without
checking for a nil Terminated and can panic; update the pod-completion check
(the block that inspects completed.Status.ContainerStatuses in the function that
calls waitForPodCompletion) to verify completed.Status.ContainerStatuses is
non-empty and that completed.Status.ContainerStatuses[0].State.Terminated != nil
before accessing ExitCode, and if Terminated is nil return a descriptive error
(e.g., "no terminated state recorded for pod <name>") or retry/handle as
appropriate so you never dereference a nil Terminated.

@gangwgr gangwgr force-pushed the network-policy-e2e branch from 6c9a415 to 93d8f5d on February 5, 2026 at 07:20
@gangwgr
Contributor Author

gangwgr commented Feb 5, 2026

/payload 4.22 nightly informing
/payload 4.22 nightly blocking

@openshift-ci
Contributor

openshift-ci bot commented Feb 5, 2026

@gangwgr: trigger 68 job(s) of type informing for the nightly release of OCP 4.22

  • periodic-ci-openshift-release-master-nightly-4.22-e2e-agent-compact-fips
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-agent-ha-dualstack-conformance
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-agent-single-node-ipv6-conformance
  • periodic-ci-openshift-hypershift-release-4.22-periodics-e2e-azure-aks-ovn-conformance
  • periodic-ci-openshift-release-master-nightly-4.22-console-aws
  • periodic-ci-openshift-cluster-control-plane-machine-set-operator-release-4.22-periodics-e2e-aws
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-aws-csi
  • periodic-ci-openshift-release-master-ci-4.22-e2e-aws-ovn
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-aws-ovn-cgroupsv2
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-aws-ovn-fips
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-aws-ovn-single-node
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-aws-ovn-single-node-csi
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-aws-ovn-single-node-serial
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-aws-ovn-single-node-techpreview
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-aws-ovn-single-node-techpreview-serial
  • periodic-ci-openshift-release-master-nightly-4.22-upgrade-from-stable-4.21-e2e-aws-upgrade-ovn-single-node
  • periodic-ci-openshift-release-master-ci-4.22-e2e-aws-ovn-upgrade-out-of-change
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-aws-ovn-upi
  • periodic-ci-openshift-cluster-control-plane-machine-set-operator-release-4.22-periodics-e2e-azure
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-azure-csi
  • periodic-ci-openshift-release-master-ci-4.22-e2e-azure-ovn
  • periodic-ci-openshift-release-master-ci-4.22-e2e-azure-ovn-serial
  • periodic-ci-openshift-release-master-ci-4.22-e2e-azure-ovn-techpreview
  • periodic-ci-openshift-release-master-ci-4.22-e2e-azure-ovn-techpreview-serial
  • periodic-ci-openshift-release-master-ci-4.22-e2e-azure-ovn-upgrade-out-of-change
  • periodic-ci-openshift-release-master-cnv-nightly-4.22-deploy-azure-kubevirt-ovn
  • periodic-ci-openshift-cluster-control-plane-machine-set-operator-release-4.22-periodics-e2e-gcp
  • periodic-ci-openshift-release-master-ci-4.22-e2e-gcp-ovn
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-gcp-ovn-csi
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-gcp-ovn-rt
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-gcp-ovn-serial
  • periodic-ci-openshift-release-master-ci-4.22-e2e-gcp-ovn-techpreview
  • periodic-ci-openshift-release-master-ci-4.22-e2e-gcp-ovn-techpreview-serial
  • periodic-ci-openshift-release-master-ci-4.22-upgrade-from-stable-4.21-e2e-gcp-ovn-upgrade
  • periodic-ci-openshift-release-master-ci-4.22-e2e-gcp-ovn-upgrade
  • periodic-ci-openshift-hypershift-release-4.22-periodics-e2e-azure-kubevirt-ovn
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-metal-ipi-ovn-dualstack
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-metal-ipi-ovn-dualstack-techpreview
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-metal-ipi-ovn-ipv6-techpreview
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-metal-ipi-ovn-serial-ipv4
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-metal-ipi-ovn-serial-virtualmedia-1of2
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-metal-ipi-ovn-serial-virtualmedia-2of2
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-metal-ipi-ovn-techpreview
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-metal-ipi-ovn-upgrade
  • periodic-ci-openshift-release-master-nightly-4.22-upgrade-from-stable-4.21-e2e-metal-ipi-ovn-upgrade
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-metal-ipi-serial-ovn-ipv6
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-metal-ipi-serial-ovn-dualstack
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-metal-ipi-upgrade-ovn-ipv6
  • periodic-ci-openshift-release-master-nightly-4.22-upgrade-from-stable-4.21-e2e-metal-ipi-upgrade-ovn-ipv6
  • periodic-ci-openshift-release-master-nightly-4.22-metal-ovn-single-node-recert-cluster-rename
  • periodic-ci-openshift-osde2e-main-nightly-4.22-osd-aws
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-osd-ccs-gcp
  • periodic-ci-openshift-osde2e-main-nightly-4.22-osd-gcp
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-aws-ovn-proxy
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-metal-ovn-single-node-live-iso
  • periodic-ci-openshift-eng-ocp-qe-perfscale-ci-main-aws-4.22-nightly-x86-payload-control-plane-6nodes
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-telco5g
  • periodic-ci-openshift-release-master-ci-4.22-upgrade-from-stable-4.21-e2e-aws-ovn-upgrade
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-vsphere-ovn
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-vsphere-ovn-csi
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-vsphere-ovn-serial
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-vsphere-ovn-techpreview
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-vsphere-ovn-techpreview-serial
  • periodic-ci-openshift-release-master-ci-4.22-e2e-vsphere-ovn-upgrade
  • periodic-ci-openshift-release-master-ci-4.22-upgrade-from-stable-4.21-e2e-vsphere-ovn-upgrade
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-vsphere-ovn-upi
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-vsphere-ovn-upi-serial
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-vsphere-static-ovn

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/9fabeea0-028f-11f1-821b-315b0366a22d-0

trigger 14 job(s) of type blocking for the nightly release of OCP 4.22

  • periodic-ci-openshift-release-master-ci-4.22-e2e-aws-upgrade-ovn-single-node
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-aws-ovn-upgrade-fips
  • periodic-ci-openshift-release-master-ci-4.22-e2e-azure-ovn-upgrade
  • periodic-ci-openshift-release-master-ci-4.22-upgrade-from-stable-4.21-e2e-gcp-ovn-rt-upgrade
  • periodic-ci-openshift-hypershift-release-4.22-periodics-e2e-aws-ovn-conformance
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-aws-ovn-serial-1of2
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-aws-ovn-serial-2of2
  • periodic-ci-openshift-release-master-ci-4.22-e2e-aws-ovn-techpreview
  • periodic-ci-openshift-release-master-ci-4.22-e2e-aws-ovn-techpreview-serial-1of3
  • periodic-ci-openshift-release-master-ci-4.22-e2e-aws-ovn-techpreview-serial-2of3
  • periodic-ci-openshift-release-master-ci-4.22-e2e-aws-ovn-techpreview-serial-3of3
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-aws-ovn-upgrade-fips-no-nat-instance
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-metal-ipi-ovn-ipv4
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-metal-ipi-ovn-ipv6

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/9fabeea0-028f-11f1-821b-315b0366a22d-1

@gangwgr
Contributor Author

gangwgr commented Feb 5, 2026

/retitle CNTRLPLANE-2610: Add Auth network policy e2e cases

@openshift-ci openshift-ci bot changed the title from "Auth network policy e2e cases" to "CNTRLPLANE-2610: Add Auth network policy e2e cases" on Feb 5, 2026
@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label Feb 5, 2026
@openshift-ci-robot
Contributor

openshift-ci-robot commented Feb 5, 2026

@gangwgr: This pull request references CNTRLPLANE-2610 which is a valid jira issue.

Details

In response to this:

Auth network policy e2e cases

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@gangwgr
Contributor Author

gangwgr commented Feb 5, 2026

/retitle CNTRLPLANE-2625:Add Auth network policy e2e cases

@openshift-ci openshift-ci bot changed the title from "CNTRLPLANE-2610: Add Auth network policy e2e cases" to "CNTRLPLANE-2625:Add Auth network policy e2e cases" on Feb 5, 2026
@openshift-ci-robot
Contributor

openshift-ci-robot commented Feb 5, 2026

@gangwgr: This pull request references CNTRLPLANE-2625 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.22.0" version, but no target version was set.

Details

In response to this:

Auth network policy e2e cases

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@gangwgr
Contributor Author

gangwgr commented Feb 5, 2026

/testwith openshift/cluster-authentication-operator/main/e2e-operator #825

@openshift-ci
Contributor

openshift-ci bot commented Feb 5, 2026

@gangwgr, testwith: could not generate prow job. ERROR:

could not determine ci op config from metadata: got unexpected http 404 status code from configresolver: failed to get config: could not find any config for branch main on repo openshift/cluster-authentication-operator

@gangwgr
Contributor Author

gangwgr commented Feb 5, 2026

/testwith openshift/cluster-authentication-operator/master/e2e-aws-external-oidc-conformance-serial #825

@gangwgr
Contributor Author

gangwgr commented Feb 5, 2026

/testwith openshift/cluster-kube-apiserver-operator/main/e2e-gcp-operator-serial-ote #825

@gangwgr
Contributor Author

gangwgr commented Feb 5, 2026

/hold
wait for CI to pass

@gangwgr
Contributor Author

gangwgr commented Feb 6, 2026

/testwith openshift/cluster-authentication-operator/master/e2e-aws-operator-parallel-ote #825

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@test/e2e/network_policy.go`:
- Around line 194-204: The hasPort function treats a nil
NetworkPolicyPort.Protocol as a wildcard; change it to treat nil as TCP-only by
checking p.Protocol explicitly: if p.Protocol == nil then only return true when
the requested protocol equals corev1.ProtocolTCP, otherwise when p.Protocol !=
nil compare *p.Protocol == protocol; keep the existing port IntValue check and
the function name hasPort to locate the change.
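
A sketch of the check this prompt describes; the hasPort signature is assumed here, since the function body itself is not quoted anywhere in this thread:

package e2e

import (
	corev1 "k8s.io/api/core/v1"
	networkingv1 "k8s.io/api/networking/v1"
)

// Hypothetical shape of the fix: a nil Protocol is treated as TCP (the
// NetworkPolicy API default) rather than as a wildcard matching any protocol.
func hasPort(ports []networkingv1.NetworkPolicyPort, protocol corev1.Protocol, port int) bool {
	for _, p := range ports {
		if p.Port == nil || p.Port.IntValue() != port {
			continue
		}
		if p.Protocol == nil {
			// Unset protocol defaults to TCP in the NetworkPolicy API.
			if protocol == corev1.ProtocolTCP {
				return true
			}
			continue
		}
		if *p.Protocol == protocol {
			return true
		}
	}
	return false
}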

@gangwgr
Contributor Author

gangwgr commented Feb 6, 2026

/testwith openshift/cluster-authentication-operator/master/e2e-aws-operator-parallel-ote #825

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@test/e2e/network_policy.go`:
- Around line 131-161: The test testAuthNetworkPolicyReconcile in
test/e2e/network_policy.go must be marked to run serially: update the Ginkgo
It/Describe titles where testAuthNetworkPolicyReconcile is registered (the two
test name occurrences referenced in the review) to include the “[Serial]” tag
(e.g., prefix or append “[Serial]” to the test name strings) so the suite
enforces Parallelism: 1; alternatively, ensure this same test is only executed
in a serial-only job if you prefer not to modify the name.
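
For illustration, tagging is just a matter of including "[Serial]" in the registered spec name; the Describe/It wording below is assumed, not the PR's actual text:

package e2e

import (
	"context"

	g "github.com/onsi/ginkgo/v2"
)

// Hypothetical example: with "[Serial]" in the spec name, the suite that keys
// off the [Serial] tag (per the review comment above) picks this test up and
// runs it with Parallelism: 1.
var _ = g.Describe("[Operator][Serial] authentication operator NetworkPolicy reconciliation", func() {
	g.It("should recreate deleted NetworkPolicies", func(ctx context.Context) {
		// test body elided; only the tag in the name matters for suite selection
	})
})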

@gangwgr gangwgr force-pushed the network-policy-e2e branch from e0fa194 to ad79e58 on February 6, 2026 at 12:11
Comment on lines 136 to 137
g.GinkgoWriter.Printf("deleting NetworkPolicy %s/%s\n", "openshift-authentication-operator", "authentication-operator-networkpolicy")
restoreNetworkPolicy(ctx, kubeClient, "openshift-authentication-operator", "authentication-operator-networkpolicy")
Member


Maybe define constants for "openshift-authentication-operator" and "authentication-operator-networkpolicy" for consistency with the other checks.
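
Something along these lines would do (the constant names are illustrative only, not prescribed by the reviewer):

// Illustrative only: hoisting the repeated literals into package-level
// constants so every check in the file refers to the same names.
const (
	authOperatorNamespace  = "openshift-authentication-operator"
	authOperatorPolicyName = "authentication-operator-networkpolicy"
)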

Comment on lines 144 to 145
g.GinkgoWriter.Printf("deleting NetworkPolicy %s/%s\n", "openshift-authentication-operator", defaultDenyAllPolicyName)
restoreNetworkPolicy(ctx, kubeClient, "openshift-authentication-operator", defaultDenyAllPolicyName)
Member


Same here, define a constant for the namespace. I see this pattern below as well, so this comment stands for all occurrences.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@cmd/cluster-authentication-operator-tests-ext/main.go`:
- Around line 72-81: The test suite registration duplicates the Name
"openshift/cluster-authentication-operator/operator/serial" and uses a qualifier
that excludes "[Serial]" instead of selecting "[Parallel]"; update the second
extension.AddSuite call (oteextension.Suite) to use a distinct Name (e.g.,
".../operator/parallel") and change its Qualifiers expression to explicitly
include tests tagged with "[Parallel]" (for example by replacing
`!name.contains("[Serial]") && (...)` with `name.contains("[Parallel]") &&
(...)` or an equivalent that matches [Parallel] tests); ensure Parallelism
remains appropriate for the new suite.
🧹 Nitpick comments (1)
test/e2e/network_policy.go (1)

162-163: Use authOperatorNamespace constant for consistency.

Line 163 uses the hardcoded string "openshift-authentication-operator" while line 147 uses the authOperatorNamespace constant for the same namespace.

Suggested fix
 	g.GinkgoWriter.Printf("mutating NetworkPolicy %s/%s\n", "openshift-authentication-operator", defaultDenyAllPolicyName)
-	mutateAndRestoreNetworkPolicy(ctx, kubeClient, "openshift-authentication-operator", defaultDenyAllPolicyName)
+	mutateAndRestoreNetworkPolicy(ctx, kubeClient, authOperatorNamespace, defaultDenyAllPolicyName)

Comment on lines 72 to 81
	// The following suite runs tests that verify the operator's behaviour.
	// This suite is executed only on pull requests targeting this repository.
	// Tests tagged with [Parallel] and any of [Operator], [OIDC], [Templates], [Tokens] are included in this suite.
	extension.AddSuite(oteextension.Suite{
		Name:        "openshift/cluster-authentication-operator/operator/serial",
		Parallelism: 1,
		Qualifiers: []string{
			`!name.contains("[Serial]") && (name.contains("[Operator]") || name.contains("[OIDC]") || name.contains("[Templates]") || name.contains("[Tokens]"))`,
		},
	})

⚠️ Potential issue | 🔴 Critical

Duplicate suite name will cause registration conflicts.

Both suites at lines 64-70 and lines 75-81 share the same name "openshift/cluster-authentication-operator/operator/serial". This will likely cause one suite to overwrite the other in the registry.

Additionally, the comment mentions [Parallel] but the qualifier !name.contains("[Serial]") filters for tests without [Serial], which is not the same as tests explicitly tagged with [Parallel].

Proposed fix
 	// The following suite runs tests that verify the operator's behaviour.
 	// This suite is executed only on pull requests targeting this repository.
-	// Tests tagged with [Parallel] and any of [Operator], [OIDC], [Templates], [Tokens] are included in this suite.
+	// Tests NOT tagged with [Serial] but tagged with any of [Operator], [OIDC], [Templates], [Tokens] are included in this suite.
 	extension.AddSuite(oteextension.Suite{
-		Name:        "openshift/cluster-authentication-operator/operator/serial",
+		Name:        "openshift/cluster-authentication-operator/operator/parallel",
 		Parallelism: 1,
 		Qualifiers: []string{
 			`!name.contains("[Serial]") && (name.contains("[Operator]") || name.contains("[OIDC]") || name.contains("[Templates]") || name.contains("[Tokens]"))`,
 		},
 	})
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-	// The following suite runs tests that verify the operator's behaviour.
-	// This suite is executed only on pull requests targeting this repository.
-	// Tests tagged with [Parallel] and any of [Operator], [OIDC], [Templates], [Tokens] are included in this suite.
-	extension.AddSuite(oteextension.Suite{
-		Name:        "openshift/cluster-authentication-operator/operator/serial",
-		Parallelism: 1,
-		Qualifiers: []string{
-			`!name.contains("[Serial]") && (name.contains("[Operator]") || name.contains("[OIDC]") || name.contains("[Templates]") || name.contains("[Tokens]"))`,
-		},
-	})
+	// The following suite runs tests that verify the operator's behaviour.
+	// This suite is executed only on pull requests targeting this repository.
+	// Tests NOT tagged with [Serial] but tagged with any of [Operator], [OIDC], [Templates], [Tokens] are included in this suite.
+	extension.AddSuite(oteextension.Suite{
+		Name:        "openshift/cluster-authentication-operator/operator/parallel",
+		Parallelism: 1,
+		Qualifiers: []string{
+			`!name.contains("[Serial]") && (name.contains("[Operator]") || name.contains("[OIDC]") || name.contains("[Templates]") || name.contains("[Tokens]"))`,
+		},
+	})
🤖 Prompt for AI Agents
In `@cmd/cluster-authentication-operator-tests-ext/main.go` around lines 72 - 81,
The test suite registration duplicates the Name
"openshift/cluster-authentication-operator/operator/serial" and uses a qualifier
that excludes "[Serial]" instead of selecting "[Parallel]"; update the second
extension.AddSuite call (oteextension.Suite) to use a distinct Name (e.g.,
".../operator/parallel") and change its Qualifiers expression to explicitly
include tests tagged with "[Parallel]" (for example by replacing
`!name.contains("[Serial]") && (...)` with `name.contains("[Parallel]") &&
(...)` or an equivalent that matches [Parallel] tests); ensure Parallelism
remains appropriate for the new suite.

@gangwgr
Contributor Author

gangwgr commented Feb 6, 2026

/testwith openshift/cluster-authentication-operator/master/e2e-aws-operator-parallel-ote #825

@gangwgr gangwgr force-pushed the network-policy-e2e branch from 7a197a9 to 3744687 on February 6, 2026 at 16:08
@gangwgr
Contributor Author

gangwgr commented Feb 6, 2026

/testwith openshift/cluster-authentication-operator/master/e2e-aws-operator-parallel-ote #825

1 similar comment
@gangwgr
Contributor Author

gangwgr commented Feb 8, 2026

/testwith openshift/cluster-authentication-operator/master/e2e-aws-operator-parallel-ote #825

@openshift-ci
Contributor

openshift-ci bot commented Feb 8, 2026

@gangwgr: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name | Commit | Details | Required | Rerun command
ci/prow/e2e-aws-operator-serial-ote | 7da46bc | link | false | /test e2e-aws-operator-serial-ote
ci/prow/e2e-aws-operator-parallel-ote | 7da46bc | link | false | /test e2e-aws-operator-parallel-ote

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@gangwgr
Contributor Author

gangwgr commented Feb 9, 2026

/testwith openshift/cluster-authentication-operator/master/e2e-aws-operator-parallel-ote #825

2 similar comments
@gangwgr
Contributor Author

gangwgr commented Feb 9, 2026

/testwith openshift/cluster-authentication-operator/master/e2e-aws-operator-parallel-ote #825

@gangwgr
Contributor Author

gangwgr commented Feb 11, 2026

/testwith openshift/cluster-authentication-operator/master/e2e-aws-operator-parallel-ote #825

@gangwgr
Contributor Author

gangwgr commented Feb 12, 2026

/testwith openshift/cluster-authentication-operator/master/e2e-aws-operator-parallel-ote #825
