refactor(scheduler): migrate NodeInfo, PodInfo, and plugin resource r… #1146
Conversation
📝 Walkthrough

The pull request refactors the scheduler's resource representation system, migrating from scalar Resource pointers to vector-based ResourceVector value types with an accompanying ResourceVectorMap for indexed access. This systematic change affects resource allocation, topology calculations, error handling, and logging throughout multiple scheduler components and plugins.

Changes
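For readers following the migration, the core pattern can be sketched with simplified stand-in types. These are illustrative only, not the actual KAI-scheduler definitions: a ResourceVector holds quantities by slot, and a ResourceVectorMap translates resource names to slot indices.

```go
package main

import "fmt"

// ResourceVector and ResourceVectorMap are simplified stand-ins for the
// PR's types; the real API in pkg/scheduler/api/resource_info may differ.
type ResourceVector []float64

type ResourceVectorMap struct {
	index map[string]int
	names []string
}

func NewResourceVectorMap() *ResourceVectorMap {
	return &ResourceVectorMap{index: map[string]int{}}
}

// GetIndex returns the slot for a resource name, registering it if new.
// (The review notes the real GetIndex returns -1 for unregistered names.)
func (m *ResourceVectorMap) GetIndex(name string) int {
	if i, ok := m.index[name]; ok {
		return i
	}
	i := len(m.names)
	m.index[name] = i
	m.names = append(m.names, name)
	return i
}

// Get returns the quantity at slot i, or 0 for an out-of-range index.
func (v ResourceVector) Get(i int) float64 {
	if i < 0 || i >= len(v) {
		return 0
	}
	return v[i]
}

func main() {
	vm := NewResourceVectorMap()
	cpuIdx, gpuIdx := vm.GetIndex("cpu"), vm.GetIndex("gpu")
	alloc := ResourceVector{4000, 2} // milliCPU, GPUs
	fmt.Println(alloc.Get(cpuIdx), alloc.Get(gpuIdx)) // 4000 2
}
```

The value-type vector plus shared name-to-index map is what lets plugins compare and sum resources with plain loops instead of per-resource struct fields.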
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks | ✅ 1 passed | ❌ 2 failed

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
📊 Performance Benchmark Results

Comparing PR (
Legend

Raw benchmark data
PR branch:
Main branch:
…eads to vectors

Convert all read-path methods from Resource to ResourceVector operations:
- NodeInfo: IsTaskAllocatable, FittingError, GetSumOfIdleGPUs, IsCPUOnlyNode
- External plugins: proportion, topology, nodeplacement, nodeavailability, resourcetype
- Framework: session logging, statement references
- Error handling: pod_errors, job_errors

Add AcceptedResourceVector to PodInfo. Add QuantifyVector util to proportion plugin. Rewrite topology calcNodeAccommodation from iterative pod probing to a division-based vector approach.

Resource fields are still maintained via dual-write for backward compatibility until removal in subsequent commits.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
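The "division-based vector approach" mentioned for calcNodeAccommodation can be sketched as follows. This is a standalone simplified model, not the topology plugin's actual code: instead of repeatedly probing whether one more pod fits, it divides free capacity by the per-task request element-wise and takes the minimum.

```go
package main

import (
	"fmt"
	"math"
)

// calcNodeAccommodation estimates how many identical tasks fit on a node.
// Both slices are assumed to be indexed by the same resource vector map
// (slot i means the same resource in both). Names and types here are
// illustrative stand-ins, not the scheduler's real signatures.
func calcNodeAccommodation(free, perTask []float64) int {
	fits := math.MaxFloat64
	for i, req := range perTask {
		if req <= 0 {
			continue // the task does not request this resource
		}
		fits = math.Min(fits, math.Floor(free[i]/req))
	}
	if fits == math.MaxFloat64 {
		return 0 // nothing was requested; treat as no accommodation
	}
	return int(fits)
}

func main() {
	free := []float64{8000, 16384, 4}   // milliCPU, MiB, GPUs
	perTask := []float64{2000, 4096, 1} // per-task request
	fmt.Println(calcNodeAccommodation(free, perTask)) // min(4, 4, 4) = 4
}
```

Compared with iterative pod probing, this is O(resources) per node rather than O(pods × resources), which is presumably the motivation for the rewrite.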
db49cb7 to 4a7b05d (Compare)
@coderabbitai review
✅ Actions performed: Review triggered.
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
pkg/scheduler/actions/common/allocate.go (1)
177-190: ⚠️ Potential issue | 🟡 Minor

Fix log wording to match vector payload.

These logs now print task.ResReqVector (the full resource vector), but the message still says "requires … GPUs". That is misleading during debugging.

Suggested patch
```diff
-	log.InfraLogger.V(6).Infof("Binding Task <%v/%v> to node <%v>, requires: %v GPUs",
+	log.InfraLogger.V(6).Infof("Binding Task <%v/%v> to node <%v>, requires resources vector: %v",
 		task.Namespace, task.Name, node.Name, task.ResReqVector)
-	log.InfraLogger.V(6).Infof("Pipelining Task <%v/%v> to node <%v> requires: %v GPUs",
+	log.InfraLogger.V(6).Infof("Pipelining Task <%v/%v> to node <%v> requires resources vector: %v",
 		task.Namespace, task.Name, node.Name, task.ResReqVector)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/scheduler/actions/common/allocate.go` around lines 177 - 190, Update the log messages that currently print task.ResReqVector but say "requires: ... GPUs" to accurately reflect the payload; change the wording in the binding log (the log right before stmt.Allocate(...) inside the Allocate/Bind function) and in pipelineTaskToNode(...) so they mention "requires resource vector" or "requires resources" (or similar) instead of "requires: %v GPUs", keeping the same %v placeholder for task.ResReqVector.
🧹 Nitpick comments (9)
pkg/scheduler/actions/common/solvers/accumulated_scenario_filters/idle_gpus/idle_gpus.go (1)
154-154: Use a single constant for the GPU vector key.

Both line 154 and line 177 hardcode "gpu". Centralizing this key reduces typo/drift risk while the vector migration is still in progress.

Suggested refactor
```diff
 const (
 	idleGpuFilterName               = "AccumulatedIdleGpus"
 	nonAccumulatedScenarioBaseError = "accumulatedIdleGpus requires all the filters scenarios using the same instance to be based on the same scenario with accumulation of potential victims. "
 	requiredResourcesDiffError      = "The pending task %s didn't appear in the scenario given at the AccumulatedIdleGpus ctor"
 	recordedVictimsDiffError        = "The recorded victims should remain the same between the different scenario filtering. %d cache hits, pre update recorded tasks seen %d, post update recorded tasks seen %d"
 	potentialVictimsDiffError       = "The list of potential victims for the current scenario should contain the previous list of potential victims. Only %d out %d tasks are the contained in the current scenario"
+	gpuVectorKey                    = "gpu"
 )
@@
-	requiredResources = append(requiredResources, pod.ResReqVector.Get(pod.VectorMap.GetIndex("gpu")))
+	requiredResources = append(requiredResources, pod.ResReqVector.Get(pod.VectorMap.GetIndex(gpuVectorKey)))
@@
-	ig.nodesNameToIdleGpus[task.NodeName] += task.AcceptedResourceVector.Get(task.VectorMap.GetIndex("gpu"))
+	ig.nodesNameToIdleGpus[task.NodeName] += task.AcceptedResourceVector.Get(task.VectorMap.GetIndex(gpuVectorKey))
```

Also applies to: 177-177
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/scheduler/actions/common/solvers/accumulated_scenario_filters/idle_gpus/idle_gpus.go` at line 154, The code hardcodes "gpu" in multiple places (e.g., in the requiredResources append using pod.ResReqVector.Get(pod.VectorMap.GetIndex("gpu")) and the other occurrence at line 177); introduce a single file-level constant (e.g., const gpuVectorKey = "gpu") and replace all hardcoded "gpu" usages with that constant (use pod.VectorMap.GetIndex(gpuVectorKey) wherever needed) so the key is centralized and avoids drift during vector migration.

pkg/scheduler/actions/common/solvers/accumulated_scenario_filters/idle_gpus/idle_gpus_test.go (1)
28-28: Prefer per-test vector map instances over a shared package variable.

Line 28 introduces mutable shared test state. A per-test map keeps fixtures isolated and avoids hidden coupling as more vector keys are introduced in future tests.
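A minimal sketch of the per-test fixture pattern, using a simplified stand-in type (a real fixture would instead call resource_info.NewResourceVectorMap()); it shows why a shared map leaks index assignments between tests:

```go
package main

import "fmt"

// resourceVectorMap is a simplified stand-in for the scheduler's
// ResourceVectorMap; only the registration behavior matters here.
type resourceVectorMap struct{ index map[string]int }

// newTestVectorMap builds a fresh fixture; the helper name is hypothetical.
func newTestVectorMap() *resourceVectorMap {
	return &resourceVectorMap{index: map[string]int{}}
}

// GetIndex registers unseen names, so the slot a name gets depends on
// what was registered before it.
func (m *resourceVectorMap) GetIndex(name string) int {
	if i, ok := m.index[name]; ok {
		return i
	}
	i := len(m.index)
	m.index[name] = i
	return i
}

func main() {
	// Each "test" builds its own map, so registrations in one test
	// cannot shift index assignments in another.
	a := newTestVectorMap()
	a.GetIndex("nvidia.com/mig-1g.5gb") // registered only in test A
	b := newTestVectorMap()
	fmt.Println(a.GetIndex("gpu"), b.GetIndex("gpu")) // 1 0
}
```

With a shared package-level map, "gpu" would land on a different slot depending on test execution order, which is exactly the hidden coupling the comment warns about.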
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/scheduler/actions/common/solvers/accumulated_scenario_filters/idle_gpus/idle_gpus_test.go` at line 28, Replace the package-level mutable testVectorMap with per-test instances to avoid shared mutable state: remove the var testVectorMap = resource_info.NewResourceVectorMap() and instantiate resource_info.NewResourceVectorMap() inside each test (or a test setup helper) that currently references testVectorMap (search for usages of testVectorMap in idle_gpus_test.go) so each test gets its own fresh ResourceVectorMap fixture.

pkg/scheduler/plugins/proportion/proportion_test.go (1)
623-627: Reduce duplicated resource fixture setup for ResReq/ResReqVector.

The same requirement is parsed twice per pod (once for ResReq, once for ResReqVector). Create the requirement once and derive both fields from it to avoid fixture drift.

♻️ Example pattern
```diff
-ResReq:       common_info.BuildResourceRequirements("1", "1G"),
-ResReqVector: common_info.BuildResourceRequirements("1", "1G").ToVector(testVectorMap),
+req := common_info.BuildResourceRequirements("1", "1G")
+ResReq:       req,
+ResReqVector: req.ToVector(testVectorMap),
```

Also applies to: 634-638, 661-665, 672-676, 686-690, 713-717, 724-728, 738-742, 843-847, 875-879, 881-885, 887-891, 921-925, 927-931, 958-962, 964-968
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/scheduler/plugins/proportion/proportion_test.go` around lines 623 - 627, The test fixtures duplicate parsing by calling common_info.BuildResourceRequirements twice for ResReq and ResReqVector; fix by creating a single requirement variable (e.g., req := common_info.BuildResourceRequirements("2", "2G")) and reuse it for both fields so ResReq: req and ResReqVector: req.ToVector(testVectorMap); apply the same refactor to all similar blocks that build resources (lines referenced in the comment) to avoid fixture drift and keep testVectorMap usage consistent.

pkg/scheduler/plugins/proportion/utils/utils.go (1)
24-24: Add a GoDoc comment for exported QuantifyVector.

Please add a short GoDoc comment starting with QuantifyVector ….

As per coding guidelines: "Add GoDoc-style comments for exported functions and types."
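One possible shape for the requested GoDoc, shown with simplified stand-in types so the sketch compiles on its own (the real function takes the resource_info vector and map types and returns rs.ResourceQuantities):

```go
package main

import "fmt"

// ResourceQuantities is a stand-in for the resource_share package's type.
type ResourceQuantities map[string]float64

// QuantifyVector converts a resource vector into named quantities by
// looking each slot up through the accompanying name list. It returns a
// ResourceQuantities keyed by resource name. This comment illustrates the
// GoDoc shape the review asks for: it starts with the function name and
// states the inputs and the returned value.
func QuantifyVector(vec []float64, names []string) ResourceQuantities {
	out := ResourceQuantities{}
	for i, n := range names {
		if i < len(vec) { // skip names without a corresponding slot
			out[n] = vec[i]
		}
	}
	return out
}

func main() {
	q := QuantifyVector([]float64{1000, 2}, []string{"cpu", "gpu"})
	fmt.Println(q["cpu"], q["gpu"]) // 1000 2
}
```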
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/scheduler/plugins/proportion/utils/utils.go` at line 24, QuantifyVector is missing a GoDoc comment; add a GoDoc-style comment directly above the exported function QuantifyVector that begins with "QuantifyVector ..." and briefly describes what the function does, its parameters (vec and vectorMap) and the returned rs.ResourceQuantities to satisfy the project's documentation guidelines.

pkg/scheduler/plugins/proportion/proportion.go (1)
221-239: Refactor getResources to single-pass accumulation and nil-skip.

The current implementation allocates an intermediate slice and then re-iterates. A single pass is simpler and safer if any AcceptedResourceVector is nil.

Suggested patch
```diff
 func getResources(ignoreReallocatedTasks bool, pods ...*pod_info.PodInfo) resource_info.ResourceVector {
-	var vectors []resource_info.ResourceVector
+	var total resource_info.ResourceVector
 	for _, task := range pods {
 		if ignoreReallocatedTasks && pod_status.IsActiveAllocatedStatus(task.Status) {
 			continue
 		}
-		vectors = append(vectors, task.AcceptedResourceVector)
-	}
-
-	if len(vectors) == 0 {
-		return nil
-	}
-
-	total := vectors[0].Clone()
-	for _, vec := range vectors[1:] {
+		vec := task.AcceptedResourceVector
+		if vec == nil {
+			continue
+		}
+		if total == nil {
+			total = vec.Clone()
+			continue
+		}
 		total.Add(vec)
 	}
 	return total
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/scheduler/plugins/proportion/proportion.go` around lines 221 - 239, The getResources function currently builds an intermediate slice then sums it; instead, iterate once over pods, skip tasks where ignoreReallocatedTasks && pod_status.IsActiveAllocatedStatus(task.Status) and also skip when task.AcceptedResourceVector is nil, and accumulate directly into a single resource_info.ResourceVector named total by setting total = task.AcceptedResourceVector.Clone() for the first non-nil vector and total.Add(vec) for subsequent ones; if no non-nil vectors are seen return nil. Use the existing function name getResources, types resource_info.ResourceVector and pod_info.PodInfo, and the AcceptedResourceVector field/Clone/Add methods referenced in the diff.

pkg/scheduler/cache/cluster_info/cluster_info.go (1)
285-285: Replace the hardcoded GPU key with the shared constant.

Using a string literal here is brittle when a canonical constant already exists in this file.
♻️ Proposed cleanup
```diff
-	if nodeInfo.AllocatableVector.Get(nodeInfo.VectorMap.GetIndex("gpu")) > 0 {
+	if nodeInfo.AllocatableVector.Get(nodeInfo.VectorMap.GetIndex(constants.GpuResource)) > 0 {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/scheduler/cache/cluster_info/cluster_info.go` at line 285, The code uses a hardcoded string literal "gpu" in nodeInfo.VectorMap.GetIndex("gpu") when checking nodeInfo.AllocatableVector.Get(...); replace that literal with the package-level/shared GPU constant (use the existing canonical constant in this file, e.g., GPUResourceKey) so the check reads nodeInfo.VectorMap.GetIndex(GPUResourceKey) (keep using nodeInfo, AllocatableVector.Get and VectorMap.GetIndex as-is).

pkg/scheduler/plugins/topology/job_filtering.go (1)
113-124: Extract duplicated task-vector aggregation into one helper.

The same accumulation logic exists in getTasksAllocationMetadata and sortTreeFromRoot; consolidating it will reduce drift risk.

Also applies to: 431-440
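A possible shape for the extracted helper, shown with simplified stand-in types rather than the real pod_info/resource_info packages. It assumes all non-nil task vectors share the same length (they would, when built from one ResourceVectorMap):

```go
package main

import "fmt"

// podInfo is a stand-in carrying only the field the review mentions.
type podInfo struct {
	ResReqVector []float64
}

// aggregateTasksResources sums per-task request vectors and counts tasks,
// the single helper the review suggests extracting. The name is the
// review's suggestion, not an existing function in the codebase.
func aggregateTasksResources(tasks []*podInfo) ([]float64, int) {
	var total []float64
	for _, t := range tasks {
		if t.ResReqVector == nil {
			continue
		}
		if total == nil {
			// clone the first vector so callers' vectors are never mutated
			total = append([]float64(nil), t.ResReqVector...)
			continue
		}
		for i := range total {
			total[i] += t.ResReqVector[i]
		}
	}
	return total, len(tasks)
}

func main() {
	tasks := []*podInfo{
		{ResReqVector: []float64{1000, 1}},
		{ResReqVector: []float64{500, 0}},
	}
	total, n := aggregateTasksResources(tasks)
	fmt.Println(total, n) // [1500 1] 2
}
```

Both call sites would then obtain tasksResources and tasksCount from this one function, so a future change to the accumulation rule happens in exactly one place.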
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/scheduler/plugins/topology/job_filtering.go` around lines 113 - 124, The aggregation logic that builds a combined ResourceVector and task count is duplicated in getTasksAllocationMetadata and in sortTreeFromRoot; extract it into a single helper (e.g., aggregateTasksResources or computeTasksResourceSummary) that accepts []*pod_info.PodInfo and returns (resource_info.ResourceVector, int), move the loop that clones the first task.ResReqVector and adds subsequent vectors into that helper, and replace the bodies of getTasksAllocationMetadata and the code in sortTreeFromRoot (and the duplicate at lines ~431-440) to call the new helper to obtain tasksResources and tasksCount.

pkg/scheduler/plugins/proportion/reclaimable/strategies/strategies.go (1)
18-24: Add a GoDoc comment for exported FitsReclaimStrategy.

The exported function signature was updated, but it still has no GoDoc comment.

As per coding guidelines: "Add GoDoc-style comments for exported functions and types".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/scheduler/plugins/proportion/reclaimable/strategies/strategies.go` around lines 18 - 24, Add a GoDoc comment for the exported function FitsReclaimStrategy that briefly describes what the function does and explains its parameters and return value: mention that it determines whether a reclaimer (reclaimerResources) can reclaim resources from a reclaimee given the resource vector map (vectorMap), queue attributes for reclaimer and reclaimee (reclaimerQueue, reclaimeeQueue) and the reclaimee's remaining share (reclaimeeRemainingShare), and that it returns a bool indicating whether the reclaim fits; place the comment immediately above the FitsReclaimStrategy declaration and follow GoDoc conventions (starts with the function name and is a complete sentence).

pkg/scheduler/plugins/proportion/reclaimable/reclaimable.go (1)
6-20: Reorder imports into standard / external / internal groups.

k8s.io/api/core/v1 is an external dependency and should be separated from internal imports.

♻️ Suggested import grouping
import ( "fmt" "maps" "math" "strings" + v1 "k8s.io/api/core/v1" + commonconstants "github.com/NVIDIA/KAI-scheduler/pkg/common/constants" "github.com/NVIDIA/KAI-scheduler/pkg/scheduler/api/common_info" "github.com/NVIDIA/KAI-scheduler/pkg/scheduler/api/resource_info" "github.com/NVIDIA/KAI-scheduler/pkg/scheduler/log" "github.com/NVIDIA/KAI-scheduler/pkg/scheduler/plugins/proportion/reclaimable/strategies" rs "github.com/NVIDIA/KAI-scheduler/pkg/scheduler/plugins/proportion/resource_share" "github.com/NVIDIA/KAI-scheduler/pkg/scheduler/plugins/proportion/utils" - v1 "k8s.io/api/core/v1" )As per coding guidelines "Organize imports in three groups separated by blank lines: (1) Standard library, (2) External dependencies, (3) Internal packages".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/scheduler/plugins/proportion/reclaimable/reclaimable.go` around lines 6 - 20, Reorder the import block in reclaimable.go into three groups separated by blank lines: (1) standard library imports (fmt, maps, math, strings), (2) external dependencies (v1 "k8s.io/api/core/v1"), and (3) internal packages (commonconstants, common_info, resource_info, log, strategies, rs, utils) while preserving existing aliases (e.g., v1 and rs) and import order within each group to satisfy the project's import grouping guideline.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pkg/scheduler/api/common_info/pod_errors.go`:
- Around line 80-83: The code reads vectors using indices from
vectorMap.GetIndex without checking for -1; update the logic around the uses of
vectorMap.GetIndex for migProfile (and the analogous scalar/resource access
later) to first capture the index (e.g., migIdx :=
vectorMap.GetIndex(string(migProfile))) and guard with if migIdx >= 0 before
calling availableVector.Get(migIdx) or capacityVector.Get(migIdx); if the index
is < 0, return or surface an explicit error/log indicating the resource/MIG
profile is unregistered (so you don't silently treat it as zero), mirroring the
existing pattern used elsewhere in resource_vector.go.
In `@pkg/scheduler/api/node_info/node_info.go`:
- Around line 394-403: The helper lessEqualVectorsExcludingGPU mutates its input
ResourceVector values (a and b) which can race; change it to a non-mutating
comparison: get gpuIdx via ni.VectorMap.GetIndex(commonconstants.GpuResource)
and then loop over all valid indices (or use a length accessor) comparing
a.Get(i) <= b.Get(i) for each i except gpuIdx, returning false on the first
violation and true otherwise. Ensure you reference and use only
ResourceVector.Get and the VectorMap/GetIndex (no Set calls) so neither
task.ResReqVector nor node vectors are modified during the check.
In `@pkg/scheduler/plugins/proportion/capacity_policy/capacity_policy.go`:
- Around line 79-83: getRequiredQuota iterates tasksToAllocate and blindly calls
utils.QuantifyVector(pod.ResReqVector, pod.VectorMap) which will nil-deref for
pods that haven't initialized vector fields; add a defensive check in
getRequiredQuota: if pod.ResReqVector==nil || pod.VectorMap==nil then compute
quantities from the pod's scalar resource fields (e.g., CPU/milliCPU, Memory,
GPU scalar fields the pod struct exposes) and add those to quota.GPU,
quota.MilliCPU and quota.Memory, otherwise call utils.QuantifyVector as before;
update variable names (quantities, quota.GPU/MilliCPU/Memory) accordingly so
behavior is identical for vectorized pods.
In `@pkg/scheduler/plugins/proportion/proportion_test.go`:
- Around line 811-817: The node's VectorMap and AllocatableVector are being
built from a newly created vectorMap, which can diverge from the test fixtures'
testVectorMap and break vector index alignment; change the setup so
testData.node.VectorMap is assigned the existing testVectorMap (use
testVectorMap instead of a locally created vectorMap) and compute
testData.node.AllocatableVector via
testData.node.Allocatable.ToVector(testVectorMap) so getNodeResources and pod
vectors share the same ResourceVectorMap instance.
In `@pkg/scheduler/plugins/proportion/reclaimable/reclaimable.go`:
- Around line 269-289: getInvolvedResourcesNames currently only checks the
GpuResource slot; add a check for the MIG-backed GPU slot so MIG-only vectors
also mark GPU as involved: retrieve the MIG index from vectorMap (similar to
cpuIdx/memIdx/gpuIdx, e.g. migIdx :=
vectorMap.GetIndex(commonconstants.MigResource) or the project's MIG constant),
ensure the index is valid, and if vec.Get(migIdx) > 0 set
involvedResources[rs.GpuResource] = struct{}{} (keep the existing checks for
cpuIdx/memIdx/gpuIdx intact in the getInvolvedResourcesNames function).
In `@pkg/scheduler/plugins/topology/job_filtering_test.go`:
- Around line 31-32: Tests currently mix a global testVectorMap (created via
resource_info.NewResourceVectorMap()) with per-tree VectorMap instances causing
inconsistent vector/indices; replace the global testVectorMap by creating a
single VectorMap inside each test and pass that same VectorMap instance to every
topology/tree creation in that test (instead of creating new maps inside
functions), update usages referencing testVectorMap so all calls use the local
VectorMap, and remove any other per-tree NewResourceVectorMap() calls so each
test runs end-to-end with one consistent VectorMap.
In `@pkg/scheduler/plugins/topology/topology_plugin.go`:
- Around line 59-64: If nodes can be empty, ensure sharedVectorMap is
initialized to a non-nil default instead of staying nil: after the loop that
sets sharedVectorMap from nodeInfo.VectorMap, check if sharedVectorMap == nil
and, if so, create and assign a new resource_info.ResourceVectorMap (or call the
existing constructor/helper) before it is used or assigned into topology trees
(references: sharedVectorMap, nodes, resource_info.ResourceVectorMap); this
prevents nil derefs when later converting/indexing vectors.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 3560b2aa-a32c-4a45-a10b-4d963473d8f0
📒 Files selected for processing (38)
- pkg/scheduler/actions/common/allocate.go
- pkg/scheduler/actions/common/feasible_nodes_test.go
- pkg/scheduler/actions/common/solvers/accumulated_scenario_filters/idle_gpus/idle_gpus.go
- pkg/scheduler/actions/common/solvers/accumulated_scenario_filters/idle_gpus/idle_gpus_test.go
- pkg/scheduler/actions/common/solvers/pod_scenario_builder_test.go
- pkg/scheduler/actions/utils/job_order_by_queue_test.go
- pkg/scheduler/api/common_info/job_errors.go
- pkg/scheduler/api/common_info/job_errors_test.go
- pkg/scheduler/api/common_info/pod_errors.go
- pkg/scheduler/api/common_info/pod_errors_test.go
- pkg/scheduler/api/node_info/gpu_sharing_node_info.go
- pkg/scheduler/api/node_info/node_info.go
- pkg/scheduler/api/node_info/node_info_test.go
- pkg/scheduler/api/pod_info/pod_info.go
- pkg/scheduler/cache/cluster_info/cluster_info.go
- pkg/scheduler/framework/session.go
- pkg/scheduler/framework/statement.go
- pkg/scheduler/plugins/nodeavailability/nodeavailability.go
- pkg/scheduler/plugins/nodeplacement/nodepack_test.go
- pkg/scheduler/plugins/nodeplacement/nodespread_test.go
- pkg/scheduler/plugins/nodeplacement/pack.go
- pkg/scheduler/plugins/nodeplacement/spread.go
- pkg/scheduler/plugins/proportion/capacity_policy/capacity_policy.go
- pkg/scheduler/plugins/proportion/capacity_policy/capacity_policy_test.go
- pkg/scheduler/plugins/proportion/proportion.go
- pkg/scheduler/plugins/proportion/proportion_test.go
- pkg/scheduler/plugins/proportion/reclaimable/reclaimable.go
- pkg/scheduler/plugins/proportion/reclaimable/reclaimable_test.go
- pkg/scheduler/plugins/proportion/reclaimable/reclaimer_info.go
- pkg/scheduler/plugins/proportion/reclaimable/strategies/strategies.go
- pkg/scheduler/plugins/proportion/reclaimable/strategies/strategies_test.go
- pkg/scheduler/plugins/proportion/utils/utils.go
- pkg/scheduler/plugins/resourcetype/resourcetype.go
- pkg/scheduler/plugins/topology/job_filtering.go
- pkg/scheduler/plugins/topology/job_filtering_test.go
- pkg/scheduler/plugins/topology/node_scoring_test.go
- pkg/scheduler/plugins/topology/topology_plugin.go
- pkg/scheduler/plugins/topology/topology_structs.go
```go
migIdx := vectorMap.GetIndex(string(migProfile))
availableMigProfilesQuant := int64(availableVector.Get(migIdx))
capacityMigProfilesQuant := int64(capacityVector.Get(migIdx))
if availableMigProfilesQuant < quant {
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Finding ResourceVectorMap/Get implementation ==="
fd resource_vector.go -t f 2>/dev/null || echo "File not found, searching in pkg/scheduler"
find . -name "resource_vector.go" -type f 2>/dev/null | head -5
echo
echo "=== Finding pod_errors.go ==="
find . -name "pod_errors.go" -type f 2>/dev/null | head -5Repository: NVIDIA/KAI-Scheduler
Length of output: 297
🏁 Script executed:
#!/bin/bash
# Let's search for the resource_vector.go file and examine GetIndex and Get implementations
rg -l "ResourceVectorMap|ResourceVector" --type go | head -10Repository: NVIDIA/KAI-Scheduler
Length of output: 590
🏁 Script executed:
#!/bin/bash
# Search for GetIndex and Get method implementations
rg -n "func.*GetIndex\(" --type go -A 5
rg -n "func.*\.Get\(" --type go -A 5 | head -50Repository: NVIDIA/KAI-Scheduler
Length of output: 1284
🏁 Script executed:
#!/bin/bash
# Read the actual pod_errors.go file to see lines 80-83 and 137-145
cat pkg/scheduler/api/common_info/pod_errors.go | head -150

Repository: NVIDIA/KAI-Scheduler
Length of output: 5664
🏁 Script executed:
#!/bin/bash
# Let's find the ResourceVector.Get implementation
rg -n "func.*ResourceVector.*Get" --type go -A 10 | grep -A 10 "func (.*ResourceVector) Get"Repository: NVIDIA/KAI-Scheduler
Length of output: 899
🏁 Script executed:
#!/bin/bash
# Check what Get returns for invalid index
rg -n "Get\(" pkg/scheduler/api/resource_info/resource_vector.go -B 2 -A 10 | head -60Repository: NVIDIA/KAI-Scheduler
Length of output: 2240
🏁 Script executed:
#!/bin/bash
# Examine the ResourceVector type and Get method more carefully
sed -n '1,300p' pkg/scheduler/api/resource_info/resource_vector.go | tail -150

Repository: NVIDIA/KAI-Scheduler
Length of output: 4369
🏁 Script executed:
#!/bin/bash
# Check how GetIndex followed by Get is used elsewhere in the codebase
rg -n "GetIndex.*\(" --type go -A 3 | grep -A 3 "\.Get(" | head -100Repository: NVIDIA/KAI-Scheduler
Length of output: 7695
🏁 Script executed:
#!/bin/bash
# Check for explicit index guard patterns before Get calls
rg -n "if.*GetIndex.*>=" --type go -B 2 -A 2Repository: NVIDIA/KAI-Scheduler
Length of output: 1298
Add guards before reading missing resource indices from vectors.
GetIndex returns -1 for unregistered resources, and while Get(-1) safely returns 0 (no panic risk), this masks resource accounting errors. When a MIG profile or scalar resource is requested but absent from the vectorMap, the code silently treats it as 0 available, which can produce incorrect error messages.
This pattern is inconsistent with other vector access in resource_vector.go (lines 211–212, 233–234, 238–240), where explicit if idx >= 0 guards precede vector operations. For consistency and to catch misconfigurations early, add similar guards at lines 80 and 137.
Suggested hardening pattern
```diff
-migIdx := vectorMap.GetIndex(string(migProfile))
-availableMigProfilesQuant := int64(availableVector.Get(migIdx))
-capacityMigProfilesQuant := int64(capacityVector.Get(migIdx))
+migIdx := vectorMap.GetIndex(string(migProfile))
+if migIdx >= 0 {
+	availableMigProfilesQuant := int64(availableVector.Get(migIdx))
+	capacityMigProfilesQuant := int64(capacityVector.Get(migIdx))
 if availableMigProfilesQuant < quant {
```

```diff
-scalarIdx := vectorMap.GetIndex(string(requestedResourceName))
-availableResourceQuant := int64(availableVector.Get(scalarIdx))
-capacityResourceQuant := int64(capacityVector.Get(scalarIdx))
+scalarIdx := vectorMap.GetIndex(string(requestedResourceName))
+if scalarIdx >= 0 {
+	availableResourceQuant := int64(availableVector.Get(scalarIdx))
+	capacityResourceQuant := int64(capacityVector.Get(scalarIdx))
 if availableResourceQuant < requestedResourceQuant {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/scheduler/api/common_info/pod_errors.go` around lines 80 - 83, The code
reads vectors using indices from vectorMap.GetIndex without checking for -1;
update the logic around the uses of vectorMap.GetIndex for migProfile (and the
analogous scalar/resource access later) to first capture the index (e.g., migIdx
:= vectorMap.GetIndex(string(migProfile))) and guard with if migIdx >= 0 before
calling availableVector.Get(migIdx) or capacityVector.Get(migIdx); if the index
is < 0, return or surface an explicit error/log indicating the resource/MIG
profile is unregistered (so you don't silently treat it as zero), mirroring the
existing pattern used elsewhere in resource_vector.go.
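The guarded-index pattern the review asks for can be sketched with minimal stand-ins for the vector types. The `VectorMap`, `Vector`, and `availableFor` names below are assumptions for illustration, not the scheduler's actual definitions; the point is that a missing resource surfaces as an explicit error rather than a silent zero.

```go
package main

import "fmt"

// Minimal stand-in for the scheduler's ResourceVectorMap (name assumed).
type VectorMap struct{ idx map[string]int }

func NewVectorMap(names ...string) *VectorMap {
	m := &VectorMap{idx: map[string]int{}}
	for _, n := range names {
		m.idx[n] = len(m.idx)
	}
	return m
}

// GetIndex returns -1 for resources that were never registered.
func (m *VectorMap) GetIndex(name string) int {
	if i, ok := m.idx[name]; ok {
		return i
	}
	return -1
}

type Vector []float64

// Get tolerates -1 by returning 0, which is exactly what can mask
// resource accounting errors.
func (v Vector) Get(i int) float64 {
	if i < 0 || i >= len(v) {
		return 0
	}
	return v[i]
}

// availableFor applies the guard: an unregistered resource produces an
// explicit error instead of being silently read as zero availability.
func availableFor(m *VectorMap, v Vector, name string) (float64, error) {
	idx := m.GetIndex(name)
	if idx < 0 {
		return 0, fmt.Errorf("resource %q not registered in vector map", name)
	}
	return v.Get(idx), nil
}

func main() {
	m := NewVectorMap("cpu", "memory")
	v := Vector{4000, 8192}
	if q, err := availableFor(m, v, "cpu"); err == nil {
		fmt.Println("cpu available:", q)
	}
	if _, err := availableFor(m, v, "nvidia.com/mig-1g.5gb"); err != nil {
		fmt.Println("guarded:", err)
	}
}
```

With the guard in place, a requested MIG profile that was never added to the map fails loudly at the call site instead of appearing as zero capacity in an error message.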
func (ni *NodeInfo) lessEqualVectorsExcludingGPU(a, b resource_info.ResourceVector) bool {
	gpuIdx := ni.VectorMap.GetIndex(commonconstants.GpuResource)
	savedA := a.Get(gpuIdx)
	savedB := b.Get(gpuIdx)
	a.Set(gpuIdx, 0)
	b.Set(gpuIdx, 0)
	result := a.LessEqual(b)
	a.Set(gpuIdx, savedA)
	b.Set(gpuIdx, savedB)
	return result
}
Avoid mutating shared vectors in resource comparisons.
This helper temporarily writes into both inputs (a and b). That mutates task.ResReqVector and node vectors during feasibility checks and can race under concurrent node evaluation.
Proposed non-mutating fix
func (ni *NodeInfo) lessEqualVectorsExcludingGPU(a, b resource_info.ResourceVector) bool {
gpuIdx := ni.VectorMap.GetIndex(commonconstants.GpuResource)
- savedA := a.Get(gpuIdx)
- savedB := b.Get(gpuIdx)
- a.Set(gpuIdx, 0)
- b.Set(gpuIdx, 0)
- result := a.LessEqual(b)
- a.Set(gpuIdx, savedA)
- b.Set(gpuIdx, savedB)
- return result
+ if len(a) != len(b) {
+ return false
+ }
+ for i := 0; i < len(a); i++ {
+ if i == gpuIdx {
+ continue
+ }
+ if a.Get(i) > b.Get(i) {
+ return false
+ }
+ }
+ return true
}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/scheduler/api/node_info/node_info.go` around lines 394 - 403, The helper
lessEqualVectorsExcludingGPU mutates its input ResourceVector values (a and b)
which can race; change it to a non-mutating comparison: get gpuIdx via
ni.VectorMap.GetIndex(commonconstants.GpuResource) and then loop over all valid
indices (or use a length accessor) comparing a.Get(i) <= b.Get(i) for each i
except gpuIdx, returning false on the first violation and true otherwise. Ensure
you reference and use only ResourceVector.Get and the VectorMap/GetIndex (no Set
calls) so neither task.ResReqVector nor node vectors are modified during the
check.
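The non-mutating alternative suggested above can be reduced to a plain loop that skips the excluded slot. The sketch below uses a simplified `Vector` type rather than the scheduler's `resource_info.ResourceVector`; the save/zero/restore dance is replaced by read-only comparisons, so concurrent node evaluations can share the same vectors safely.

```go
package main

import "fmt"

type Vector []float64

// lessEqualExcluding reports whether a <= b element-wise, ignoring the
// slot at skipIdx, without writing into either input vector.
func lessEqualExcluding(a, b Vector, skipIdx int) bool {
	if len(a) != len(b) {
		return false
	}
	for i := range a {
		if i == skipIdx {
			continue
		}
		if a[i] > b[i] {
			return false
		}
	}
	return true
}

func main() {
	req := Vector{2000, 4096, 8}  // cpu, memory, gpu
	idle := Vector{4000, 8192, 1} // gpu slot intentionally too small
	fmt.Println(lessEqualExcluding(req, idle, 2)) // true: gpu slot skipped
	fmt.Println(lessEqualExcluding(req, idle, 0)) // false: 8 GPUs > 1
}
```

Because nothing is written back, there is no window in which another goroutine can observe a zeroed GPU slot in a shared vector.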
 	for _, pod := range tasksToAllocate {
-		quota.GPU += pod.ResReq.GetGpusQuota()
-		quota.MilliCPU += pod.ResReq.Cpu()
-		quota.Memory += pod.ResReq.Memory()
+		quantities := utils.QuantifyVector(pod.ResReqVector, pod.VectorMap)
+		quota.GPU += quantities[rs.GpuResource]
+		quota.MilliCPU += quantities[rs.CpuResource]
+		quota.Memory += quantities[rs.MemoryResource]
Add fallback for pods without vector fields initialized.
getRequiredQuota now assumes ResReqVector and VectorMap are always present. Add a scalar fallback to avoid nil dereference / bad quota in transitional paths.
🛠️ Suggested defensive fallback
func getRequiredQuota(tasksToAllocate []*pod_info.PodInfo) *podgroup_info.JobRequirement {
quota := podgroup_info.JobRequirement{}
for _, pod := range tasksToAllocate {
- quantities := utils.QuantifyVector(pod.ResReqVector, pod.VectorMap)
+ var quantities rs.ResourceQuantities
+ if pod.VectorMap == nil || pod.ResReqVector == nil {
+ if pod.ResReq == nil {
+ continue
+ }
+ quantities = utils.QuantifyResourceRequirements(pod.ResReq)
+ } else {
+ quantities = utils.QuantifyVector(pod.ResReqVector, pod.VectorMap)
+ }
quota.GPU += quantities[rs.GpuResource]
quota.MilliCPU += quantities[rs.CpuResource]
quota.Memory += quantities[rs.MemoryResource]
}
return "a
}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/scheduler/plugins/proportion/capacity_policy/capacity_policy.go` around
lines 79 - 83, getRequiredQuota iterates tasksToAllocate and blindly calls
utils.QuantifyVector(pod.ResReqVector, pod.VectorMap) which will nil-deref for
pods that haven't initialized vector fields; add a defensive check in
getRequiredQuota: if pod.ResReqVector==nil || pod.VectorMap==nil then compute
quantities from the pod's scalar resource fields (e.g., CPU/milliCPU, Memory,
GPU scalar fields the pod struct exposes) and add those to quota.GPU,
quota.MilliCPU and quota.Memory, otherwise call utils.QuantifyVector as before;
update variable names (quantities, quota.GPU/MilliCPU/Memory) accordingly so
behavior is identical for vectorized pods.
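During a dual-write migration like this one, a defensive fallback is a small dispatch: prefer the vector form, fall back to the legacy scalar form, and skip pods with neither. The types below (`ScalarReq`, `Pod`, `Quota`, and the fixed cpu/memory/gpu slot order) are illustrative stand-ins, not the scheduler's real structs.

```go
package main

import "fmt"

// Simplified stand-ins for the pod's dual resource representations.
type ScalarReq struct{ CPU, Memory, GPU float64 }

type Pod struct {
	ResReq       *ScalarReq // legacy scalar form
	ResReqVector []float64  // vector form; nil during the transition
}

type Quota struct{ CPU, Memory, GPU float64 }

// addToQuota prefers the vector form but falls back to the scalar form
// when the vector fields were never initialized, so transitional pods
// neither panic nor contribute a zero quota.
func addToQuota(q *Quota, p *Pod) {
	if p.ResReqVector != nil {
		q.CPU += p.ResReqVector[0]
		q.Memory += p.ResReqVector[1]
		q.GPU += p.ResReqVector[2]
		return
	}
	if p.ResReq == nil {
		return // nothing to account for
	}
	q.CPU += p.ResReq.CPU
	q.Memory += p.ResReq.Memory
	q.GPU += p.ResReq.GPU
}

func main() {
	pods := []*Pod{
		{ResReqVector: []float64{1000, 2048, 1}},
		{ResReq: &ScalarReq{CPU: 500, Memory: 1024}}, // vector not yet populated
		{}, // neither form set
	}
	q := &Quota{}
	for _, p := range pods {
		addToQuota(q, p)
	}
	fmt.Printf("cpu=%v mem=%v gpu=%v\n", q.CPU, q.Memory, q.GPU)
}
```

Once the scalar fields are removed in a later commit, the fallback branch can be deleted without touching the vector path.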
vectorMap := resource_info.NewResourceVectorMap()
for rName := range testData.node.Allocatable.ScalarResources() {
	vectorMap.AddResource(string(rName))
}
testData.node.VectorMap = vectorMap
testData.node.AllocatableVector = testData.node.Allocatable.ToVector(vectorMap)
if got := getNodeResources(session, testData.node); !reflect.DeepEqual(got, testData.want) {
Ensure node and pod vectors use the same ResourceVectorMap instance.
At lines 811-817, testData.node.VectorMap is a locally built map, while several pod fixtures in the same tests use testVectorMap. This can silently misalign indices if either map gains additional resources, making vector math assertions brittle.
💡 Suggested fix
vectorMap := resource_info.NewResourceVectorMap()
for rName := range testData.node.Allocatable.ScalarResources() {
vectorMap.AddResource(string(rName))
}
testData.node.VectorMap = vectorMap
testData.node.AllocatableVector = testData.node.Allocatable.ToVector(vectorMap)
+for _, p := range testData.node.PodInfos {
+ if p != nil && p.ResReq != nil {
+ p.VectorMap = vectorMap
+ p.ResReqVector = p.ResReq.ToVector(vectorMap)
+ }
+}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/scheduler/plugins/proportion/proportion_test.go` around lines 811 - 817,
The node's VectorMap and AllocatableVector are being built from a newly created
vectorMap, which can diverge from the test fixtures' testVectorMap and break
vector index alignment; change the setup so testData.node.VectorMap is assigned
the existing testVectorMap (use testVectorMap instead of a locally created
vectorMap) and compute testData.node.AllocatableVector via
testData.node.Allocatable.ToVector(testVectorMap) so getNodeResources and pod
vectors share the same ResourceVectorMap instance.
 func getInvolvedResourcesNames(resources []resource_info.ResourceVector, vectorMap *resource_info.ResourceVectorMap) map[rs.ResourceName]any {
 	involvedResources := map[rs.ResourceName]any{}
-	for _, resource := range resources {
-		if resource == nil {
+	cpuIdx := vectorMap.GetIndex(string(v1.ResourceCPU))
+	memIdx := vectorMap.GetIndex(string(v1.ResourceMemory))
+	gpuIdx := vectorMap.GetIndex(commonconstants.GpuResource)
+	for _, vec := range resources {
+		if vec == nil {
 			continue
 		}
-		if resource.Cpu() > 0 {
+		if vec.Get(cpuIdx) > 0 {
 			involvedResources[rs.CpuResource] = struct{}{}
 		}
-		if resource.Memory() > 0 {
+		if vec.Get(memIdx) > 0 {
 			involvedResources[rs.MemoryResource] = struct{}{}
 		}
-		if resource.GPUs() > 0 {
+		if vec.Get(gpuIdx) > 0 {
 			involvedResources[rs.GpuResource] = struct{}{}
Include MIG-backed GPU demand when collecting involved resources.
getInvolvedResourcesNames only marks GPU involvement from the GpuResource slot. MIG-only vectors will not mark GPU as involved, so GPU saturation checks can be skipped even though utils.QuantifyVector counts MIG toward GPU quota.
🐛 Suggested fix
func getInvolvedResourcesNames(resources []resource_info.ResourceVector, vectorMap *resource_info.ResourceVectorMap) map[rs.ResourceName]any {
involvedResources := map[rs.ResourceName]any{}
- cpuIdx := vectorMap.GetIndex(string(v1.ResourceCPU))
- memIdx := vectorMap.GetIndex(string(v1.ResourceMemory))
- gpuIdx := vectorMap.GetIndex(commonconstants.GpuResource)
for _, vec := range resources {
if vec == nil {
continue
}
- if vec.Get(cpuIdx) > 0 {
+ quantified := utils.QuantifyVector(vec, vectorMap)
+ if quantified[rs.CpuResource] > 0 {
involvedResources[rs.CpuResource] = struct{}{}
}
- if vec.Get(memIdx) > 0 {
+ if quantified[rs.MemoryResource] > 0 {
involvedResources[rs.MemoryResource] = struct{}{}
}
- if vec.Get(gpuIdx) > 0 {
+ if quantified[rs.GpuResource] > 0 {
involvedResources[rs.GpuResource] = struct{}{}
}
}
return involvedResources
}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/scheduler/plugins/proportion/reclaimable/reclaimable.go` around lines 269
- 289, getInvolvedResourcesNames currently only checks the GpuResource slot; add
a check for the MIG-backed GPU slot so MIG-only vectors also mark GPU as
involved: retrieve the MIG index from vectorMap (similar to
cpuIdx/memIdx/gpuIdx, e.g. migIdx :=
vectorMap.GetIndex(commonconstants.MigResource) or the project's MIG constant),
ensure the index is valid, and if vec.Get(migIdx) > 0 set
involvedResources[rs.GpuResource] = struct{}{} (keep the existing checks for
cpuIdx/memIdx/gpuIdx intact in the getInvolvedResourcesNames function).
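The mismatch the review describes, MIG slices counting toward GPU quota while the involvement check reads only the plain GPU slot, can be sketched as follows. The resource names follow NVIDIA's device-plugin convention (`nvidia.com/gpu`, `nvidia.com/mig-*`), but `VectorMap`, `quantifyGPU`, and `gpuInvolvedNaive` are simplified stand-ins rather than the plugin's actual helpers.

```go
package main

import (
	"fmt"
	"strings"
)

type VectorMap struct{ names []string }

func (m *VectorMap) GetIndex(name string) int {
	for i, n := range m.names {
		if n == name {
			return i
		}
	}
	return -1
}

// quantifyGPU folds both the plain GPU slot and any MIG-profile slots
// into a single GPU quantity, mirroring a QuantifyVector-style helper.
func quantifyGPU(vec []float64, m *VectorMap) float64 {
	total := 0.0
	for i, name := range m.names {
		if name == "nvidia.com/gpu" || strings.HasPrefix(name, "nvidia.com/mig-") {
			total += vec[i]
		}
	}
	return total
}

// gpuInvolvedNaive checks only the plain GPU slot: a MIG-only vector is
// reported as "no GPU involved" even though quantifyGPU counts it.
func gpuInvolvedNaive(vec []float64, m *VectorMap) bool {
	return vec[m.GetIndex("nvidia.com/gpu")] > 0
}

func main() {
	m := &VectorMap{names: []string{"cpu", "nvidia.com/gpu", "nvidia.com/mig-1g.5gb"}}
	migOnly := []float64{1000, 0, 2} // requests MIG slices, no whole GPUs
	fmt.Println(quantifyGPU(migOnly, m))      // 2: MIG counts toward GPU quota
	fmt.Println(gpuInvolvedNaive(migOnly, m)) // false: the naive check misses it
}
```

Any fix that makes both paths agree, whether by quantifying the vector in the involvement check or by also scanning MIG slots, closes the gap.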
var testVectorMap = resource_info.NewResourceVectorMap()
Use a single VectorMap per test topology end-to-end.
This file mixes a global testVectorMap with per-tree VectorMap instances. That can make vectors and indices inconsistent, creating order-dependent/false-positive tests.
✅ Suggested direction
-var testVectorMap = resource_info.NewResourceVectorMap()
+func newTestVectorMap() *resource_info.ResourceVectorMap {
+ return resource_info.NewResourceVectorMap()
+}

- setupTopologyTree: func() *Info {
+ setupTopologyTree: func() *Info {
+ vm := newTestVectorMap()
tree := &Info{
- Name: "test-topology",
- VectorMap: resource_info.NewResourceVectorMap(),
+ Name: "test-topology",
+ VectorMap: vm,
...
- IdleOrReleasingVector: resource_info.NewResource(0, 0, 0).ToVector(testVectorMap),
+ IdleOrReleasingVector: resource_info.NewResource(0, 0, 0).ToVector(vm),

- tt.job.RootSubGroupSet.GetAllPodSets(), tasksResources.ToVector(testVectorMap), tasksCount,
+ tt.job.RootSubGroupSet.GetAllPodSets(), tasksResources.ToVector(tt.topologyTree.VectorMap), tasksCount,

Also applies to: 92-93, 107-108, 2126-2126
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/scheduler/plugins/topology/job_filtering_test.go` around lines 31 - 32,
Tests currently mix a global testVectorMap (created via
resource_info.NewResourceVectorMap()) with per-tree VectorMap instances causing
inconsistent vector/indices; replace the global testVectorMap by creating a
single VectorMap inside each test and pass that same VectorMap instance to every
topology/tree creation in that test (instead of creating new maps inside
functions), update usages referencing testVectorMap so all calls use the local
VectorMap, and remove any other per-tree NewResourceVectorMap() calls so each
test runs end-to-end with one consistent VectorMap.
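Why mixing separately built maps is brittle: two maps can register the same resource names in a different order, so a vector built against one map reads the wrong slot through the other, with no error. A minimal sketch (types are simplified stand-ins for `resource_info.ResourceVectorMap`):

```go
package main

import "fmt"

type VectorMap struct{ idx map[string]int }

func NewVectorMap() *VectorMap { return &VectorMap{idx: map[string]int{}} }

// AddResource assigns the next free index to a new resource name.
func (m *VectorMap) AddResource(name string) {
	if _, ok := m.idx[name]; !ok {
		m.idx[name] = len(m.idx)
	}
}

func (m *VectorMap) GetIndex(name string) int {
	if i, ok := m.idx[name]; ok {
		return i
	}
	return -1
}

func main() {
	// Two maps built from differently ordered inputs: same names,
	// different indices.
	a := NewVectorMap()
	a.AddResource("cpu")
	a.AddResource("memory")

	b := NewVectorMap()
	b.AddResource("memory")
	b.AddResource("cpu")

	vec := []float64{4000, 8192} // built against map a: cpu=4000, memory=8192

	fmt.Println(vec[a.GetIndex("cpu")]) // 4000: correct through the originating map
	fmt.Println(vec[b.GetIndex("cpu")]) // 8192: silently wrong through the other map
}
```

This is why the review pushes for one `VectorMap` instance per test topology end to end: a shared map makes index alignment a structural guarantee rather than an accident of registration order.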
// Get VectorMap from any node (all share the same map)
var sharedVectorMap *resource_info.ResourceVectorMap
for _, nodeInfo := range nodes {
	sharedVectorMap = nodeInfo.VectorMap
	break
}
Ensure VectorMap is never nil when no nodes exist.
If nodes is empty, sharedVectorMap stays nil and gets assigned into every topology tree. Any later vector conversion/indexing against that map can fail at runtime.
🛠️ Proposed fix
func (t *topologyPlugin) initializeTopologyTree(topologies []*kaiv1alpha1.Topology, nodes map[string]*node_info.NodeInfo) {
- // Get VectorMap from any node (all share the same map)
- var sharedVectorMap *resource_info.ResourceVectorMap
+ sharedVectorMap := resource_info.NewResourceVectorMap()
for _, nodeInfo := range nodes {
- sharedVectorMap = nodeInfo.VectorMap
- break
+ if nodeInfo != nil && nodeInfo.VectorMap != nil {
+ sharedVectorMap = nodeInfo.VectorMap
+ break
+ }
 	}

Also applies to: 75-75
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/scheduler/plugins/topology/topology_plugin.go` around lines 59 - 64, If
nodes can be empty, ensure sharedVectorMap is initialized to a non-nil default
instead of staying nil: after the loop that sets sharedVectorMap from
nodeInfo.VectorMap, check if sharedVectorMap == nil and, if so, create and
assign a new resource_info.ResourceVectorMap (or call the existing
constructor/helper) before it is used or assigned into topology trees
(references: sharedVectorMap, nodes, resource_info.ResourceVectorMap); this
prevents nil derefs when later converting/indexing vectors.
…eads to vectors
Description
Convert all read-path methods from Resource to ResourceVector operations.
Add AcceptedResourceVector to PodInfo. Add QuantifyVector util to proportion plugin. Rewrite topology calcNodeAccommodation from iterative pod probing to division-based vector approach.
Resource fields still maintained via dual-write for backward compatibility until removal in subsequent commits.
Related Issues
Fixes #
Checklist
Summary by CodeRabbit
Release Notes