Issue with distinctAttribute in ResourceClaimTemplate #62

@qasmi

Description

As discussed in this Slack conversation, I am trying to run a misalignment test scenario to measure the impact of misaligned devices on workload performance. During testing, I discovered that the distinctAttribute constraint does not appear to behave as expected.

I am running RKE2 Kubernetes 1.34 with the following feature gates enabled:

--feature-gates=DRAConsumableCapacity=true,DRAResourceClaimDeviceStatus=true

My test workload configuration is as follows:


apiVersion: batch/v1
kind: Job
metadata:
  name: gpu-cpu
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - image: ghcr.io/coreweave/nccl-tests:12.9.1-devel-ubuntu22.04-nccl2.28.3-1-8b67957
        name: test
        command:
          - /bin/bash
          - -c
          - |
            set -eux
            sleep infinity  
        resources:
          claims:
          - name: gpu
      resourceClaims:
        - name: gpu
          resourceClaimTemplateName: gpu-cpu
---
apiVersion: resource.k8s.io/v1
kind: ResourceClaimTemplate
metadata:
  name: gpu-cpu
spec:
  spec:
    devices:
      requests:
      - name: nic
        exactly:
          deviceClassName: dra.net
          count: 1
          selectors:
          - cel:
              expression: device.attributes["dra.net"].rdma == true
      - name: cpu
        exactly:
          deviceClassName: dra.cpu
          capacity:
            requests:
              dra.cpu/cpu: "5"
      constraints:
      - distinctAttribute: "dra.net/numaNode"
        requests: ["cpu", "nic"]

The ResourceClaim status shows the following allocation:
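
For reproduction, the same view should be retrievable with kubectl describe, assuming the generated claim name and the default namespace:

kubectl describe resourceclaim gpu-cpu-qx5ms-gpu-694lq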


Name:         gpu-cpu-qx5ms-gpu-694lq
Namespace:    default
Labels:       <none>
Annotations:  resource.kubernetes.io/pod-claim-name: gpu
API Version:  resource.k8s.io/v1
Kind:         ResourceClaim
Metadata:
  Creation Timestamp:  2026-02-16T11:14:11Z
  Finalizers:
    resource.kubernetes.io/delete-protection
  Generate Name:  gpu-cpu-qx5ms-gpu-
  Owner References:
    API Version:           v1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  Pod
    Name:                  gpu-cpu-qx5ms
    UID:                   5a685ca8-56a4-473d-a6ea-7f804adb81d2
  Resource Version:        32226
  UID:                     de2970e8-ea8b-493d-8a92-109e82c7a44b
Spec:
  Devices:
    Constraints:
      Distinct Attribute:  dra.net/numaNode
      Requests:
        cpu
        nic
    Requests:
      Exactly:
        Allocation Mode:    ExactCount
        Count:              1
        Device Class Name:  dra.net
        Selectors:
          Cel:
            Expression:  device.attributes["dra.net"].rdma == true
      Name:              nic
      Exactly:
        Allocation Mode:  ExactCount
        Capacity:
          Requests:
            dra.cpu/cpu:    5
        Count:              1
        Device Class Name:  dra.cpu
      Name:                 cpu
Status:
  Allocation:
    Devices:
      Results:
        Device:   pci-0000-3f-00-0
        Driver:   dra.net
        Pool:     x-03-21
        Request:  nic
        Consumed Capacity:
          dra.cpu/cpu:  5
        Device:         cpudevnuma000
        Driver:         dra.cpu
        Pool:           x-03-21
        Request:        cpu
        Share ID:       1f4fd174-a7b9-4a52-a998-7e044a2e0990
    Node Selector:
      Node Selector Terms:
        Match Fields:
          Key:       metadata.name
          Operator:  In
          Values:
            x-03-21
  Reserved For:
    Name:      gpu-cpu-qx5ms
    Resource:  pods
    UID:       5a685ca8-56a4-473d-a6ea-7f804adb81d2
Events:        <none>

Despite specifying distinctAttribute: "dra.net/numaNode" in the ResourceClaimTemplate, the NIC (pci-0000-3f-00-0) and the CPU device (cpudevnuma000) were both allocated on the same NUMA node: both ResourceSlices below report dra.net/numaNode = 0.

CPU resourceSlice:

    Attributes:
      dra.cpu/numCPUs:
        Int:  144
      dra.cpu/numaNodeID:
        Int:  0
      dra.cpu/smtEnabled:
        Bool:  true
      dra.cpu/socketID:
        Int:  0
      dra.net/numaNode:
        Int:  0
    Capacity:
      dra.cpu/cpu:
        Value:                   144
    Name:                        cpudevnuma000
    Allow Multiple Allocations:  true

DRANET resourceSlice:

    Attributes:
      dra.net/alias:
        String:
      dra.net/ebpf:
        Bool:  false
      dra.net/encapsulation:
        String:  ether
      dra.net/ifName:
        String:  ens1006f0np0
      dra.net/mtu:
        Int:  9216
      dra.net/numaNode:
        Int:  0
      dra.net/pciAddress:
        String:  0000:3f:00.0
      dra.net/pciDevice:
        String:  MT43244 BlueField-3 integrated ConnectX-7 network controller
      dra.net/pciSubsystem:
        String:  0021
      dra.net/pciVendor:
        String:  Mellanox Technologies
      dra.net/rdma:
        Bool:  true
      dra.net/sriov:
        Bool:  true
      dra.net/sriovVfs:
        Int:  0
      dra.net/state:
        String:  up
      dra.net/type:
        String:  device
      dra.net/virtual:
        Bool:  false
      resource.kubernetes.io/pcieRoot:
        String:  pci0000:3a
    Name:        pci-0000-3f-00-0
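
Both slices report dra.net/numaNode = 0, i.e. the constraint did not place the two devices on different NUMA nodes. A quick cross-check over all published devices (a sketch; the jq paths assume the resource.k8s.io/v1 ResourceSlice schema and the fully-qualified attribute keys shown above):

kubectl get resourceslices -o json | jq -r \
  '.items[] | .spec.driver as $d | .spec.devices[]? |
   "\($d)\t\(.name)\t\(.attributes["dra.net/numaNode"].int // "n/a")"'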

However, at the host level, the container process appears to be running on a CPU from NUMA 1:

ps -Leo pid,tid,psr,comm | grep 2349207
2349207 2349207  84 sleep

Cross-checking the topology with numactl:

numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215
node 0 size: 772578 MB
node 0 free: 333969 MB
node 1 cpus: 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287
node 1 size: 774077 MB
node 1 free: 523307 MB
node distances:
node   0   1
  0:  10  21
  1:  21  10
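
Core 84 falls in node 1's CPU list, which can also be read straight from sysfs (assuming the PSR value from the ps output above):

ls -d /sys/devices/system/cpu/cpu84/node*
# expected: /sys/devices/system/cpu/cpu84/node1, i.e. core 84 belongs to NUMA node 1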

Am I understanding this correctly? Even though the CPU device is allocated from NUMA 0 in the ResourceClaim, the container is still allowed to run on any CPU, so the process can end up executing on a core from NUMA 1?

I also checked the cpuset inside the container:

tests# cat /sys/fs/cgroup/cpuset.cpus.effective
0-287

Does this mean that the container has access to all CPU cores across all NUMA nodes?
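
As an additional check, the affinity mask of the process itself should tell the same story (assuming the PID from the ps output above):

taskset -cp 2349207
# prints the current affinity list; with the 0-287 cpuset above it should span all cores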
