Summary
Support a mixed CPU allocation mode where a Pod can request:
- isolated CPUs through CPU DRA
- additional fractional CPU from the regular shared CPU pool
A concrete example is:
- Pod CPU request/limit: 5.5
- CPU DRA request: 5

Expected result:
- 5 isolated CPUs are allocated through CPU DRA
- the remaining 0.5 CPU stays available for non-critical application threads
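The request above could be expressed roughly as follows. This is a sketch only: the DeviceClass name and driver are hypothetical, the API version may differ by release, and CPU-as-a-DRA-device is exactly the capability this request asks for, so none of this reflects an existing driver.

```yaml
# Hypothetical CPU DRA claim: 5 exact isolated CPUs.
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaim
metadata:
  name: dedicated-cpus
spec:
  devices:
    requests:
    - name: isolated-cpus
      deviceClassName: cpu.example.com   # hypothetical CPU DeviceClass
      allocationMode: ExactCount
      count: 5
---
apiVersion: v1
kind: Pod
metadata:
  name: dpdk-app
spec:
  resourceClaims:
  - name: dedicated-cpus
    resourceClaimName: dedicated-cpus
  containers:
  - name: dataplane
    image: example/dpdk-app              # placeholder image
    resources:
      requests:
        cpu: "5.5"                       # 5 dedicated + 0.5 shared
      limits:
        cpu: "5.5"
      claims:
      - name: dedicated-cpus
```

The intent is that the 5 CPUs satisfied by the claim become the isolated set, while the 0.5 CPU remainder of the Pod-level request stays in the shared pool.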
Motivation
High-performance workloads such as DPDK applications often need a fixed set of isolated CPUs for dataplane threads, but still need a small amount of non-dedicated CPU for helper, control-plane, logging, or housekeeping threads.
Today, the common workaround is to over-request dedicated CPUs and manage thread placement manually. That wastes isolated capacity and makes the deployment harder to express and automate.
Requested behavior
When a Pod combines a fractional CPU request with an integer CPU DRA request, the implementation should:
- allocate the integer DRA portion as isolated CPUs
- keep the fractional remainder outside the dedicated CPU set
- avoid forcing all workload threads onto the isolated CPUs
- preserve the dedicated CPUs for the latency-sensitive part of the workload
Using the example above, a Pod that requests 5.5 CPUs and 5 CPUs through CPU DRA should end up with:
- 5 isolated CPUs for the dataplane / dedicated threads
- 0.5 CPU worth of regular shared runtime for non-critical threads
Application-facing API / discoverability
The implementation should also expose the allocated isolated CPU set to the Pod using the existing DRA application-facing mechanism, so user space applications can discover which CPUs are dedicated without relying on out-of-band configuration.
That data should be usable by applications such as DPDK to:
- pin dataplane threads to the dedicated CPU set
- keep helper / control threads off the dedicated CPUs
- follow a documented, recommended launch pattern so non-critical threads do not consume the isolated cores
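For illustration, an application could consume the exposed CPU set like this. The sketch assumes the dedicated set is surfaced to the container as a Linux cpuset list string (e.g. "2-6,8"); the `DEDICATED_CPUS` environment variable name is hypothetical, not an existing DRA contract.

```python
import os


def parse_cpuset(spec: str) -> list[int]:
    """Parse a Linux cpuset list string such as "2-6,8" into CPU ids."""
    cpus: set[int] = set()
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return sorted(cpus)


def pin_current_thread(cpus: list[int]) -> None:
    """Pin the calling thread to the given CPUs (Linux only)."""
    os.sched_setaffinity(0, cpus)


# A dataplane thread would pin itself to the dedicated set; helper
# threads simply never call pin_current_thread with these CPUs and
# stay on the shared 0.5 CPU budget.
dedicated = parse_cpuset(os.environ.get("DEDICATED_CPUS", "2-6"))
```

DPDK applications would typically pass the same set to the EAL core mask options instead of pinning manually, but the discovery step is the same either way.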
Why this matters
This would make it possible to express a very common real-world pattern:
- dedicated CPUs for the hot path
- a small shared CPU budget for everything else
That improves utilization of isolated CPUs and reduces the need for manual cpuset handling in the application.
Acceptance criteria
- a Pod requesting 5.5 CPUs and 5 CPUs through CPU DRA gets exactly 5 isolated CPUs plus 0.5 CPU from the shared pool
- the isolated CPU set is exposed to the application through the existing DRA application-facing path
- example documentation shows how to pin dataplane threads to the dedicated CPU set and keep non-critical threads off those CPUs
- the behavior is clearly defined for NUMA alignment, SMT siblings, and interaction with CPU manager policies