fix: Add resourceclaims/binding RBAC for DRA granular status authorization #1372
praveen0raj wants to merge 1 commit into kai-scheduler:main
Conversation
@praveen0raj what would happen if we use this rbac on k8s v1.31 for example? Will it fail? Should we guard this with some "if k8s version is >=1.36"?
No, it won't fail, and no version guard is needed. Kubernetes RBAC rules are declarative and permissive: a ClusterRole can reference resources that don't exist on the cluster, and the API server simply ignores rules for unknown resources and subresources. Applying the manifest won't error, and it won't cause any runtime issues.

On v1.31: the `resourceclaims/binding` rule is inert, since the subresource doesn't exist there; nothing is granted and nothing breaks.

On v1.36+: the rule grants the binder the `update`/`patch` access it needs once the `DRAResourceClaimGranularStatusAuthorization` feature gate starts enforcing it.

This is the same pattern used throughout the Kubernetes ecosystem: projects routinely ship RBAC for newer API features without version guards. Adding conditional logic would actually be worse, since it would add Helm template complexity for zero practical benefit.
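As a rough sketch of how the two rules coexist (the rule contents come from this PR; the ClusterRole name and surrounding layout are illustrative, not the actual `binder.yaml`):

```yaml
# Illustrative fragment of the binder ClusterRole. On clusters where the
# resourceclaims/binding subresource does not exist (e.g. v1.31), the
# second rule is simply ignored by the RBAC authorizer.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kai-scheduler-binder  # hypothetical name for illustration
rules:
  - apiGroups: ["resource.k8s.io"]
    resources: ["resourceclaims/status"]   # existing permission, kept as-is
    verbs: ["update", "patch"]
  - apiGroups: ["resource.k8s.io"]
    resources: ["resourceclaims/binding"]  # new: enforced from v1.36
    verbs: ["update", "patch"]
```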
@praveen0raj Thank you for explaining that.
Starting in Kubernetes v1.36, the DRAResourceClaimGranularStatusAuthorization feature gate (beta, on-by-default) requires additional permissions on the synthetic resourceclaims/binding subresource for components that modify status.allocation and status.reservedFor on ResourceClaims. KAI-Scheduler's binder modifies these fields when binding claims to nodes, so it needs resourceclaims/binding with update and patch verbs. Updated both the kubebuilder RBAC marker in the source code and the auto-generated ClusterRole manifest. Ref: kubernetes/kubernetes#138149 Signed-off-by: praveen0raj <praveen0raj@gmail.com>
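The kubebuilder marker mentioned in the commit message would look roughly like this (the exact placement in `bindrequest_controller.go` is assumed; the marker syntax is the standard controller-gen RBAC form):

```go
// Sketch of the RBAC markers on the bind-request controller (surrounding
// code context assumed). controller-gen reads these comments and emits
// the matching ClusterRole rules in the generated manifest.
// +kubebuilder:rbac:groups=resource.k8s.io,resources=resourceclaims/status,verbs=update;patch
// +kubebuilder:rbac:groups=resource.k8s.io,resources=resourceclaims/binding,verbs=update;patch
```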
Force-pushed from 86974b7 to e482447
@enoodle I have made changes. Thanks.
Summary

- Adds `resourceclaims/binding` with `update` and `patch` verbs to the binder ClusterRole

Why

Starting in Kubernetes v1.36, modifying `status.allocation` and `status.reservedFor` on a ResourceClaim requires additional permission on the synthetic `resourceclaims/binding` subresource. Without this change, KAI-Scheduler's binder will get 403 Forbidden when binding claims to nodes. The existing `resourceclaims/status` permission is still required and kept as-is.

Files changed

- `pkg/binder/controllers/bindrequest_controller.go`: added `+kubebuilder:rbac` marker
- `deployments/kai-scheduler/templates/rbac/binder.yaml`: added generated RBAC rule

References

- kubernetes/kubernetes#138149

Test plan

- Without this change: `kubectl auth can-i update resourceclaims/binding` returns `no`
- With this change: `kubectl auth can-i update resourceclaims/binding` returns `yes`
- Existing `resourceclaims/status` permission unchanged