
refactor(scheduler): migrate NodeInfo, PodInfo, and plugin resource r… #1146

Open
enoodle wants to merge 1 commit into main from erez/migrate-node-info-pod-info-to-vectors

Conversation

@enoodle (Collaborator) commented Mar 5, 2026

…eads to vectors

Description

Convert all read-path methods from Resource to ResourceVector operations:

  • NodeInfo: IsTaskAllocatable, FittingError, GetSumOfIdleGPUs, IsCPUOnlyNode
  • External plugins: proportion, topology, nodeplacement, nodeavailability, resourcetype
  • Framework: session logging, statement references
  • Error handling: pod_errors, job_errors

Add AcceptedResourceVector to PodInfo. Add a QuantifyVector util to the proportion plugin. Rewrite the topology calcNodeAccommodation from iterative pod probing to a division-based vector approach.

Resource fields are still maintained via dual-write for backward compatibility until their removal in subsequent commits.
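As a rough illustration of the read-path change (the types and method names below are a minimal sketch, not the scheduler's actual API): scalar accessors like `Idle.GPUs()` become index lookups through a shared name-to-index map, so every node's vectors agree on which slot holds which resource.

```go
package main

import "fmt"

// ResourceVectorMap maps resource names to slots in a dense vector,
// so all nodes and pods agree on which index holds which resource.
type ResourceVectorMap map[string]int

func (m ResourceVectorMap) GetIndex(name string) int { return m[name] }

// ResourceVector holds per-resource quantities in map-defined order.
type ResourceVector []float64

func (v ResourceVector) Get(idx int) float64 { return v[idx] }

func main() {
	vmap := ResourceVectorMap{"cpu": 0, "memory": 1, "gpu": 2}
	idle := ResourceVector{6, 16000, 2} // cores, MB, GPUs
	used := ResourceVector{2, 8000, 6}

	// Old style (scalar): idle.GPUs()
	// New style (vector): indexed access through the shared map.
	gpuIdx := vmap.GetIndex("gpu")
	fmt.Println(idle.Get(gpuIdx), used.Get(gpuIdx)) // prints the GPU slots: 2 6
}
```

One shared map per cluster snapshot keeps `Get` calls O(1) and avoids per-lookup string hashing on the hot path, which is presumably the motivation for the migration.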

Related Issues

Fixes #

Checklist

  • Self-reviewed
  • Added/updated tests (if needed)
  • Updated documentation (if needed)

Summary by CodeRabbit

Release Notes

  • Refactor
    • Modernized resource tracking infrastructure by transitioning from scalar resource representations to vector-based resource tracking, enabling more precise allocation of GPUs, CPUs, memory, and custom resources.
    • Enhanced resource requirement comparisons and capacity calculations throughout the scheduler to use vector operations.
    • Updated error reporting for resource-related failures to provide more detailed, vector-aware diagnostics.

@coderabbitai (Contributor) bot commented Mar 5, 2026

📝 Walkthrough

The pull request refactors the scheduler's resource representation system, migrating from scalar Resource pointers to vector-based ResourceVector value types with an accompanying ResourceVectorMap for indexed access. This systematic change affects resource allocation, topology calculations, error handling, and logging throughout multiple scheduler components and plugins.

Changes

Cohort / File(s) Summary
Core Resource Representation & Error Handling
pkg/scheduler/api/common_info/job_errors.go, pkg/scheduler/api/common_info/job_errors_test.go, pkg/scheduler/api/common_info/pod_errors.go, pkg/scheduler/api/common_info/pod_errors_test.go
Updated error constructors to accept ResourceVector and ResourceVectorMap instead of scalar Resource pointers. NewTopologyInsufficientResourcesError and NewFitErrorInsufficientResource now use vector-based resource parameters with indexed access for GPU, CPU, memory, and MIG resources. Test fixtures updated to use ToVector(vectorMap) for resource construction.
Node & Pod Info Structures
pkg/scheduler/api/node_info/node_info.go, pkg/scheduler/api/node_info/node_info_test.go, pkg/scheduler/api/pod_info/pod_info.go
Introduced AllocatableVector, IdleVector, ReleasingVector, and VectorMap fields to NodeInfo and AcceptedResourceVector to PodInfo. Updated initialization, cloning, and SetVectorMap logic to maintain vector representations. lessEqualTaskToNodeResources and isTaskAllocatableOnNonAllocatedResources now accept ResourceVector parameters.
GPU Sharing & Node Availability
pkg/scheduler/api/node_info/gpu_sharing_node_info.go, pkg/scheduler/cache/cluster_info/cluster_info.go, pkg/scheduler/plugins/nodeavailability/nodeavailability.go, pkg/scheduler/plugins/resourcetype/resourcetype.go
Replaced scalar GPU counting (Idle.GPUs(), Used.GPUs()) with vector-indexed access (IdleVector.Get(gpuIdx), UsedVector.Get(gpuIdx)) using VectorMap.GetIndex("gpu"). GPU resource accounting now fully vector-based.
Scheduler Framework & Session
pkg/scheduler/framework/session.go, pkg/scheduler/framework/statement.go
Updated GPU resource accounting and logging to use vector-based representations (IdleVector, UsedVector, ReleasingVector, ResReqVector). Replaced scalar Idle/Releasing/Used counts with indexed vector access.
Node Placement Plugins
pkg/scheduler/plugins/nodeplacement/pack.go, pkg/scheduler/plugins/nodeplacement/spread.go, pkg/scheduler/plugins/nodeplacement/nodepack_test.go, pkg/scheduler/plugins/nodeplacement/nodespread_test.go
Replaced Allocatable.Get(resourceName) with AllocatableVector.Get(node.VectorMap.GetIndex(string(resourceName))) for resource availability checks. Test setup now initializes AllocatableVector, IdleVector, ReleasingVector, and VectorMap.
Capacity & Proportion Plugins
pkg/scheduler/plugins/proportion/capacity_policy/capacity_policy.go, pkg/scheduler/plugins/proportion/capacity_policy/capacity_policy_test.go, pkg/scheduler/plugins/proportion/proportion.go, pkg/scheduler/plugins/proportion/proportion_test.go
Replaced per-task resource accounting with vector-based representation. QuantifyVector utility introduced for vector quantification. Refactored totalVictimsResources to store ResourceVector slices. Updated quota extraction and accumulation to use vector operations.
Reclaimability & Strategies
pkg/scheduler/plugins/proportion/reclaimable/reclaimer_info.go, pkg/scheduler/plugins/proportion/reclaimable/reclaimable.go, pkg/scheduler/plugins/proportion/reclaimable/reclaimable_test.go, pkg/scheduler/plugins/proportion/reclaimable/strategies/strategies.go, pkg/scheduler/plugins/proportion/reclaimable/strategies/strategies_test.go
Updated ReclaimerInfo.RequiredResources to use ResourceVector and added VectorMap field. Reclaimable method signatures now accept ResourceVector and vectorMap. Updated all quota calculations to use QuantifyVector with vector map.
Utility Functions
pkg/scheduler/plugins/proportion/utils/utils.go
Added QuantifyVector function to convert ResourceVector to ResourceQuantities using a ResourceVectorMap for index-based resource extraction.
Topology Plugin
pkg/scheduler/plugins/topology/job_filtering.go, pkg/scheduler/plugins/topology/job_filtering_test.go, pkg/scheduler/plugins/topology/node_scoring_test.go, pkg/scheduler/plugins/topology/topology_plugin.go, pkg/scheduler/plugins/topology/topology_structs.go
Extensive refactoring replacing scalar Resource with ResourceVector throughout topology allocation logic. Updated getTasksAllocationMetadata, getJobAllocatableDomains, checkJobDomainFit, and related methods to use vector-based resources. DomainInfo.IdleOrReleasingResources replaced with IdleOrReleasingVector. Info struct now includes VectorMap field.
Scheduler Actions & Tests
pkg/scheduler/actions/common/allocate.go, pkg/scheduler/actions/common/feasible_nodes_test.go, pkg/scheduler/actions/common/solvers/accumulated_scenario_filters/idle_gpus/idle_gpus.go, pkg/scheduler/actions/common/solvers/accumulated_scenario_filters/idle_gpus/idle_gpus_test.go, pkg/scheduler/actions/common/solvers/pod_scenario_builder_test.go
Updated log messages to use ResReqVector instead of ResReq. Test initialization now includes ResourceVectorMap setup and vector initialization for nodes. Updated GPU accounting to use vector-indexed access.
Job Ordering Utilities
pkg/scheduler/actions/utils/job_order_by_queue_test.go
Updated test fixtures to include VectorMap, AcceptedResourceVector, and AllocatedVector alongside or replacing scalar resource fields. Test data structures now support vector-based resource representations.
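The QuantifyVector helper described in the Utility Functions cohort could plausibly look like the following sketch (hedged: the real function in pkg/scheduler/plugins/proportion/utils likely operates on the scheduler's own types and may have a different signature). It converts a dense vector back into name-keyed quantities for quota math.

```go
package main

import "fmt"

type ResourceVectorMap map[string]int
type ResourceVector []float64

// ResourceQuantities is a named view of the same data, convenient for
// quota calculations keyed by resource name rather than by index.
type ResourceQuantities map[string]float64

// quantifyVector converts a dense vector into named quantities using
// the map's name->index entries, skipping out-of-range indices.
func quantifyVector(v ResourceVector, m ResourceVectorMap) ResourceQuantities {
	out := make(ResourceQuantities, len(m))
	for name, idx := range m {
		if idx >= 0 && idx < len(v) {
			out[name] = v[idx]
		}
	}
	return out
}

func main() {
	m := ResourceVectorMap{"cpu": 0, "memory": 1, "gpu": 2}
	q := quantifyVector(ResourceVector{4, 8000, 1}, m)
	fmt.Println(q["cpu"], q["memory"], q["gpu"]) // 4 8000 1
}
```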

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

🐰 Vectors now dance where scalars once stood
Resource maps guide the scheduler's wood
From pointers to values, the system takes flight
With indexed resource pools burning bright
A migration complete, the logic shines right!

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 5.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions that are missing them.
  • Title check — ❓ Inconclusive: the title is truncated mid-word ('refactor(scheduler): migrate NodeInfo, PodInfo, and plugin resource r…'), leaving the specific resource concept unclear. Resolution: complete the PR title to convey the full scope, e.g., 'refactor(scheduler): migrate NodeInfo, PodInfo, and plugin resources to vectors'.

✅ Passed checks (1 passed)

  • Description check — ✅ Passed: the PR description adequately covers the main changes, explaining the conversion to ResourceVector operations and updated plugin behavior, though it lacks detailed context on the full scope and rationale for the architectural change.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@github-actions bot commented Mar 5, 2026

📊 Performance Benchmark Results

Comparing PR (erez/migrate-node-info-pod-info-to-vectors) vs main branch:

main-bench.txt:155: parsing iteration count: invalid syntax
pr-bench.txt:155: parsing iteration count: invalid syntax
goos: linux
goarch: amd64
pkg: github.com/NVIDIA/KAI-scheduler/pkg/scheduler/actions
cpu: AMD EPYC 7763 64-Core Processor                
                                    │ main-bench.txt │            pr-bench.txt            │
                                    │     sec/op     │    sec/op     vs base              │
AllocateAction_SmallCluster-4           108.8m ±  0%   109.1m ±  3%  +0.19% (p=0.009 n=6)
AllocateAction_MediumCluster-4          138.7m ±  2%   138.5m ±  1%       ~ (p=0.394 n=6)
AllocateAction_LargeCluster-4           236.2m ± 11%   222.6m ± 12%       ~ (p=0.132 n=6)
ReclaimAction_SmallCluster-4            103.2m ±  0%   103.0m ±  0%  -0.14% (p=0.002 n=6)
ReclaimAction_MediumCluster-4           106.7m ±  1%   106.5m ±  1%  -0.14% (p=0.041 n=6)
PreemptAction_SmallCluster-4            104.0m ±  0%   103.9m ±  0%       ~ (p=0.132 n=6)
PreemptAction_MediumCluster-4           114.8m ±  0%   114.1m ±  0%  -0.57% (p=0.004 n=6)
ConsolidationAction_SmallCluster-4      115.1m ±  0%   115.1m ±  0%       ~ (p=0.937 n=6)
ConsolidationAction_MediumCluster-4     209.4m ±  2%   207.8m ±  2%       ~ (p=0.240 n=6)
FullSchedulingCycle_SmallCluster-4      105.9m ±  0%   105.7m ±  0%       ~ (p=0.065 n=6)
FullSchedulingCycle_MediumCluster-4     121.4m ±  1%   121.3m ±  1%       ~ (p=1.000 n=6)
FullSchedulingCycle_LargeCluster-4      164.1m ±  1%   164.6m ±  1%       ~ (p=0.485 n=6)
ManyQueues_MediumCluster-4              143.6m ±  1%   143.6m ±  1%       ~ (p=1.000 n=6)
GangScheduling_MediumCluster-4          161.7m ±  2%   161.9m ±  2%       ~ (p=0.937 n=6)
geomean                                 133.2m         132.5m        -0.53%

                                    │ main-bench.txt │            pr-bench.txt             │
                                    │      B/op      │     B/op       vs base              │
AllocateAction_SmallCluster-4           2.347Mi ± 0%    2.372Mi ± 1%  +1.06% (p=0.002 n=6)
AllocateAction_MediumCluster-4          12.63Mi ± 0%    12.73Mi ± 0%  +0.80% (p=0.002 n=6)
AllocateAction_LargeCluster-4           43.47Mi ± 0%    43.72Mi ± 0%  +0.57% (p=0.002 n=6)
ReclaimAction_SmallCluster-4            995.1Ki ± 1%   1005.9Ki ± 1%  +1.08% (p=0.002 n=6)
ReclaimAction_MediumCluster-4           3.260Mi ± 0%    3.306Mi ± 0%  +1.41% (p=0.002 n=6)
PreemptAction_SmallCluster-4            1.092Mi ± 0%    1.106Mi ± 1%  +1.29% (p=0.002 n=6)
PreemptAction_MediumCluster-4           4.367Mi ± 0%    4.432Mi ± 0%  +1.50% (p=0.002 n=6)
ConsolidationAction_SmallCluster-4      5.899Mi ± 0%    5.908Mi ± 0%  +0.15% (p=0.015 n=6)
ConsolidationAction_MediumCluster-4     48.41Mi ± 0%    48.22Mi ± 0%  -0.40% (p=0.002 n=6)
FullSchedulingCycle_SmallCluster-4      1.522Mi ± 0%    1.540Mi ± 0%  +1.21% (p=0.002 n=6)
FullSchedulingCycle_MediumCluster-4     7.445Mi ± 0%    7.517Mi ± 0%  +0.96% (p=0.002 n=6)
FullSchedulingCycle_LargeCluster-4      24.31Mi ± 0%    24.49Mi ± 0%  +0.72% (p=0.002 n=6)
ManyQueues_MediumCluster-4              17.09Mi ± 0%    17.20Mi ± 0%  +0.59% (p=0.002 n=6)
GangScheduling_MediumCluster-4          18.54Mi ± 0%    18.72Mi ± 0%  +1.01% (p=0.002 n=6)
geomean                                 6.835Mi         6.893Mi       +0.85%

                                    │ main-bench.txt │           pr-bench.txt            │
                                    │   allocs/op    │  allocs/op   vs base              │
AllocateAction_SmallCluster-4            37.85k ± 0%   38.48k ± 0%  +1.67% (p=0.002 n=6)
AllocateAction_MediumCluster-4           331.8k ± 0%   334.3k ± 0%  +0.77% (p=0.002 n=6)
AllocateAction_LargeCluster-4            1.410M ± 0%   1.417M ± 0%  +0.45% (p=0.002 n=6)
ReclaimAction_SmallCluster-4             9.240k ± 0%   9.421k ± 0%  +1.95% (p=0.002 n=6)
ReclaimAction_MediumCluster-4            29.92k ± 0%   30.67k ± 0%  +2.51% (p=0.002 n=6)
PreemptAction_SmallCluster-4             11.93k ± 0%   12.13k ± 0%  +1.72% (p=0.002 n=6)
PreemptAction_MediumCluster-4            41.73k ± 0%   42.58k ± 0%  +2.03% (p=0.002 n=6)
ConsolidationAction_SmallCluster-4       75.93k ± 0%   77.08k ± 0%  +1.52% (p=0.002 n=6)
ConsolidationAction_MediumCluster-4      697.0k ± 0%   706.9k ± 0%  +1.42% (p=0.002 n=6)
FullSchedulingCycle_SmallCluster-4       22.60k ± 0%   23.01k ± 0%  +1.81% (p=0.002 n=6)
FullSchedulingCycle_MediumCluster-4      179.6k ± 0%   181.3k ± 0%  +0.92% (p=0.002 n=6)
FullSchedulingCycle_LargeCluster-4       739.3k ± 0%   743.4k ± 0%  +0.55% (p=0.002 n=6)
ManyQueues_MediumCluster-4               369.8k ± 0%   372.4k ± 0%  +0.69% (p=0.002 n=6)
GangScheduling_MediumCluster-4           608.7k ± 0%   613.6k ± 0%  +0.81% (p=0.002 n=6)
geomean                                  116.7k        118.3k       +1.34%

pkg: github.com/NVIDIA/KAI-scheduler/pkg/scheduler/actions/integration_tests/reclaim
                            │ main-bench.txt │            pr-bench.txt             │
                            │     sec/op     │    sec/op      vs base              │
ReclaimLargeJobs_10Node-4      105.4m ± 0%     105.3m ± 0%         ~ (p=0.818 n=6)
ReclaimLargeJobs_50Node-4      145.6m ± 1%     144.6m ± 1%         ~ (p=0.699 n=6)
ReclaimLargeJobs_100Node-4     299.2m ± 3%     293.0m ± 3%         ~ (p=0.180 n=6)
ReclaimLargeJobs_200Node-4      1.223 ± 1%      1.156 ± 4%    -5.44% (p=0.002 n=6)
ReclaimLargeJobs_500Node-4      13.88 ± 3%      13.47 ± 2%    -2.96% (p=0.009 n=6)
ReclaimLargeJobs_1000Node-4     117.2 ±  ∞ ¹    113.0 ±  ∞ ¹  -3.64% (p=0.029 n=4)
geomean                         1.446           1.410         -2.50%
¹ need >= 6 samples for confidence interval at level 0.95

                            │ main-bench.txt │             pr-bench.txt             │
                            │      B/op      │      B/op       vs base              │
ReclaimLargeJobs_10Node-4     1.926Mi ± 3%     1.950Mi ± 3%    +1.25% (p=0.041 n=6)
ReclaimLargeJobs_50Node-4     18.02Mi ± 0%     18.18Mi ± 0%    +0.88% (p=0.002 n=6)
ReclaimLargeJobs_100Node-4    61.22Mi ± 0%     61.52Mi ± 0%    +0.49% (p=0.002 n=6)
ReclaimLargeJobs_200Node-4    241.3Mi ± 0%     240.7Mi ± 0%    -0.26% (p=0.002 n=6)
ReclaimLargeJobs_500Node-4    1.738Gi ± 0%     1.706Gi ± 0%    -1.87% (p=0.002 n=6)
ReclaimLargeJobs_1000Node-4   9.089Gi ±  ∞ ¹   8.779Gi ±  ∞ ¹  -3.41% (p=0.029 n=4)
geomean                       142.8Mi          142.1Mi         -0.50%
¹ need >= 6 samples for confidence interval at level 0.95

                            │ main-bench.txt │            pr-bench.txt             │
                            │   allocs/op    │   allocs/op    vs base              │
ReclaimLargeJobs_10Node-4      20.95k ± 3%     21.30k ± 3%    +1.67% (p=0.041 n=6)
ReclaimLargeJobs_50Node-4      240.5k ± 0%     244.2k ± 0%    +1.53% (p=0.002 n=6)
ReclaimLargeJobs_100Node-4     892.1k ± 0%     908.1k ± 0%    +1.79% (p=0.002 n=6)
ReclaimLargeJobs_200Node-4     3.751M ± 0%     3.843M ± 0%    +2.45% (p=0.002 n=6)
ReclaimLargeJobs_500Node-4     29.81M ± 0%     31.00M ± 0%    +3.99% (p=0.002 n=6)
ReclaimLargeJobs_1000Node-4    165.1M ±  ∞ ¹   174.0M ±  ∞ ¹  +5.44% (p=0.029 n=4)
geomean                        2.089M          2.147M         +2.80%
¹ need >= 6 samples for confidence interval at level 0.95

Legend

  • 📉 Negative delta = Performance improvement (faster)
  • 📈 Positive delta = Performance regression (slower)
  • p-value < 0.05 indicates statistically significant change
Raw benchmark data

PR branch:

goos: linux
goarch: amd64
pkg: github.com/NVIDIA/KAI-scheduler/pkg/scheduler/actions
cpu: AMD EPYC 7763 64-Core Processor                
BenchmarkAllocateAction_SmallCluster-4         	       9	 112781773 ns/op	 2513156 B/op	   38497 allocs/op
BenchmarkAllocateAction_SmallCluster-4         	      10	 109056926 ns/op	 2486467 B/op	   38478 allocs/op
BenchmarkAllocateAction_SmallCluster-4         	      10	 108908712 ns/op	 2501942 B/op	   38479 allocs/op
BenchmarkAllocateAction_SmallCluster-4         	      10	 109063391 ns/op	 2486228 B/op	   38476 allocs/op
BenchmarkAllocateAction_SmallCluster-4         	      10	 109122124 ns/op	 2487848 B/op	   38477 allocs/op
BenchmarkAllocateAction_SmallCluster-4         	      10	 108906297 ns/op	 2484823 B/op	   38472 allocs/op
BenchmarkAllocateAction_MediumCluster-4        	       8	 139776320 ns/op	13367388 B/op	  334315 allocs/op
BenchmarkAllocateAction_MediumCluster-4        	       8	 137288808 ns/op	13352732 B/op	  334300 allocs/op
BenchmarkAllocateAction_MediumCluster-4        	       8	 139051494 ns/op	13351365 B/op	  334302 allocs/op
BenchmarkAllocateAction_MediumCluster-4        	       8	 138635947 ns/op	13351735 B/op	  334306 allocs/op
BenchmarkAllocateAction_MediumCluster-4        	       8	 138193787 ns/op	13350173 B/op	  334296 allocs/op
BenchmarkAllocateAction_MediumCluster-4        	       8	 138354366 ns/op	13352571 B/op	  334310 allocs/op
BenchmarkAllocateAction_LargeCluster-4         	       5	 218341938 ns/op	45845241 B/op	 1416705 allocs/op
BenchmarkAllocateAction_LargeCluster-4         	       5	 225115344 ns/op	45848312 B/op	 1416679 allocs/op
BenchmarkAllocateAction_LargeCluster-4         	       5	 233312521 ns/op	45848942 B/op	 1416683 allocs/op
BenchmarkAllocateAction_LargeCluster-4         	       5	 249259765 ns/op	45871070 B/op	 1416694 allocs/op
BenchmarkAllocateAction_LargeCluster-4         	       5	 220011621 ns/op	45842982 B/op	 1416690 allocs/op
BenchmarkAllocateAction_LargeCluster-4         	       5	 219035861 ns/op	45841488 B/op	 1416679 allocs/op
BenchmarkReclaimAction_SmallCluster-4          	      10	 103090989 ns/op	 1026807 B/op	    9394 allocs/op
BenchmarkReclaimAction_SmallCluster-4          	      10	 102990216 ns/op	 1028676 B/op	    9412 allocs/op
BenchmarkReclaimAction_SmallCluster-4          	      10	 103025783 ns/op	 1035277 B/op	    9423 allocs/op
BenchmarkReclaimAction_SmallCluster-4          	      10	 102976566 ns/op	 1031418 B/op	    9421 allocs/op
BenchmarkReclaimAction_SmallCluster-4          	      10	 102975488 ns/op	 1027704 B/op	    9420 allocs/op
BenchmarkReclaimAction_SmallCluster-4          	      10	 103081007 ns/op	 1040162 B/op	    9422 allocs/op
BenchmarkReclaimAction_MediumCluster-4         	      10	 107298079 ns/op	 3465107 B/op	   30673 allocs/op
BenchmarkReclaimAction_MediumCluster-4         	      10	 106593264 ns/op	 3468752 B/op	   30674 allocs/op
BenchmarkReclaimAction_MediumCluster-4         	      10	 106448876 ns/op	 3457032 B/op	   30670 allocs/op
BenchmarkReclaimAction_MediumCluster-4         	      10	 106531345 ns/op	 3468789 B/op	   30674 allocs/op
BenchmarkReclaimAction_MediumCluster-4         	      10	 106434802 ns/op	 3468896 B/op	   30674 allocs/op
BenchmarkReclaimAction_MediumCluster-4         	      10	 106539387 ns/op	 3464821 B/op	   30673 allocs/op
BenchmarkPreemptAction_SmallCluster-4          	      10	 103816391 ns/op	 1157743 B/op	   12131 allocs/op
BenchmarkPreemptAction_SmallCluster-4          	      10	 103913376 ns/op	 1161735 B/op	   12133 allocs/op
BenchmarkPreemptAction_SmallCluster-4          	      10	 103972462 ns/op	 1161800 B/op	   12134 allocs/op
BenchmarkPreemptAction_SmallCluster-4          	      10	 103913708 ns/op	 1157892 B/op	   12133 allocs/op
BenchmarkPreemptAction_SmallCluster-4          	      10	 103910217 ns/op	 1168359 B/op	   12134 allocs/op
BenchmarkPreemptAction_SmallCluster-4          	      10	 103832346 ns/op	 1157868 B/op	   12133 allocs/op
BenchmarkPreemptAction_MediumCluster-4         	       9	 113759829 ns/op	 4651689 B/op	   42581 allocs/op
BenchmarkPreemptAction_MediumCluster-4         	       9	 113814603 ns/op	 4647433 B/op	   42579 allocs/op
BenchmarkPreemptAction_MediumCluster-4         	       9	 114326704 ns/op	 4651755 B/op	   42581 allocs/op
BenchmarkPreemptAction_MediumCluster-4         	       9	 114611897 ns/op	 4647407 B/op	   42580 allocs/op
BenchmarkPreemptAction_MediumCluster-4         	       9	 113972814 ns/op	 4643010 B/op	   42578 allocs/op
BenchmarkPreemptAction_MediumCluster-4         	       9	 114412131 ns/op	 4643024 B/op	   42578 allocs/op
BenchmarkConsolidationAction_SmallCluster-4    	       9	 114821147 ns/op	 6188568 B/op	   77054 allocs/op
BenchmarkConsolidationAction_SmallCluster-4    	       9	 115325202 ns/op	 6191052 B/op	   77073 allocs/op
BenchmarkConsolidationAction_SmallCluster-4    	       9	 115218584 ns/op	 6197991 B/op	   77096 allocs/op
BenchmarkConsolidationAction_SmallCluster-4    	       9	 115021918 ns/op	 6204062 B/op	   77047 allocs/op

Main branch:

goos: linux
goarch: amd64
pkg: github.com/NVIDIA/KAI-scheduler/pkg/scheduler/actions
cpu: AMD EPYC 7763 64-Core Processor                
BenchmarkAllocateAction_SmallCluster-4         	      10	 108955686 ns/op	 2461130 B/op	   37847 allocs/op
BenchmarkAllocateAction_SmallCluster-4         	      10	 108850691 ns/op	 2460807 B/op	   37845 allocs/op
BenchmarkAllocateAction_SmallCluster-4         	      10	 108849463 ns/op	 2461970 B/op	   37849 allocs/op
BenchmarkAllocateAction_SmallCluster-4         	      10	 108847403 ns/op	 2458882 B/op	   37841 allocs/op
BenchmarkAllocateAction_SmallCluster-4         	      10	 108705722 ns/op	 2461128 B/op	   37847 allocs/op
BenchmarkAllocateAction_SmallCluster-4         	      10	 108636273 ns/op	 2472241 B/op	   37847 allocs/op
BenchmarkAllocateAction_MediumCluster-4        	       8	 139302759 ns/op	13266988 B/op	  331756 allocs/op
BenchmarkAllocateAction_MediumCluster-4        	       8	 138662101 ns/op	13246229 B/op	  331757 allocs/op
BenchmarkAllocateAction_MediumCluster-4        	       8	 138336803 ns/op	13245575 B/op	  331751 allocs/op
BenchmarkAllocateAction_MediumCluster-4        	       8	 138816490 ns/op	13245199 B/op	  331748 allocs/op
BenchmarkAllocateAction_MediumCluster-4        	       8	 138585448 ns/op	13245083 B/op	  331750 allocs/op
BenchmarkAllocateAction_MediumCluster-4        	       8	 141821691 ns/op	13247598 B/op	  331744 allocs/op
BenchmarkAllocateAction_LargeCluster-4         	       5	 232258108 ns/op	45587016 B/op	 1410405 allocs/op
BenchmarkAllocateAction_LargeCluster-4         	       5	 245518237 ns/op	45599966 B/op	 1410385 allocs/op
BenchmarkAllocateAction_LargeCluster-4         	       4	 262620886 ns/op	45583104 B/op	 1410376 allocs/op
BenchmarkAllocateAction_LargeCluster-4         	       5	 226576625 ns/op	45591222 B/op	 1410384 allocs/op
BenchmarkAllocateAction_LargeCluster-4         	       5	 226076252 ns/op	45582688 B/op	 1410363 allocs/op
BenchmarkAllocateAction_LargeCluster-4         	       5	 240071122 ns/op	45584051 B/op	 1410381 allocs/op
BenchmarkReclaimAction_SmallCluster-4          	      10	 103196579 ns/op	 1011773 B/op	    9210 allocs/op
BenchmarkReclaimAction_SmallCluster-4          	      10	 103169335 ns/op	 1017445 B/op	    9232 allocs/op
BenchmarkReclaimAction_SmallCluster-4          	      10	 103155131 ns/op	 1016736 B/op	    9239 allocs/op
BenchmarkReclaimAction_SmallCluster-4          	      10	 103148187 ns/op	 1024600 B/op	    9242 allocs/op
BenchmarkReclaimAction_SmallCluster-4          	      10	 103155647 ns/op	 1024452 B/op	    9242 allocs/op
BenchmarkReclaimAction_SmallCluster-4          	      10	 103094501 ns/op	 1020578 B/op	    9241 allocs/op
BenchmarkReclaimAction_MediumCluster-4         	      10	 107396330 ns/op	 3422792 B/op	   29926 allocs/op
BenchmarkReclaimAction_MediumCluster-4         	      10	 106600424 ns/op	 3418567 B/op	   29923 allocs/op
BenchmarkReclaimAction_MediumCluster-4         	      10	 106771491 ns/op	 3418496 B/op	   29923 allocs/op
BenchmarkReclaimAction_MediumCluster-4         	      10	 106665023 ns/op	 3418564 B/op	   29923 allocs/op
BenchmarkReclaimAction_MediumCluster-4         	      10	 106696006 ns/op	 3418277 B/op	   29922 allocs/op
BenchmarkReclaimAction_MediumCluster-4         	      10	 106612038 ns/op	 3422522 B/op	   29925 allocs/op
BenchmarkPreemptAction_SmallCluster-4          	      10	 103900967 ns/op	 1141327 B/op	   11927 allocs/op
BenchmarkPreemptAction_SmallCluster-4          	      10	 104065208 ns/op	 1144992 B/op	   11927 allocs/op
BenchmarkPreemptAction_SmallCluster-4          	      10	 104040611 ns/op	 1141340 B/op	   11927 allocs/op
BenchmarkPreemptAction_SmallCluster-4          	      10	 104013318 ns/op	 1145199 B/op	   11928 allocs/op
BenchmarkPreemptAction_SmallCluster-4          	      10	 103976079 ns/op	 1145245 B/op	   11928 allocs/op
BenchmarkPreemptAction_SmallCluster-4          	      10	 103885567 ns/op	 1145386 B/op	   11928 allocs/op
BenchmarkPreemptAction_MediumCluster-4         	       9	 114709134 ns/op	 4583076 B/op	   41731 allocs/op
BenchmarkPreemptAction_MediumCluster-4         	       9	 114550545 ns/op	 4578502 B/op	   41729 allocs/op
BenchmarkPreemptAction_MediumCluster-4         	       9	 114931982 ns/op	 4578756 B/op	   41730 allocs/op
BenchmarkPreemptAction_MediumCluster-4         	       9	 114796045 ns/op	 4582914 B/op	   41731 allocs/op
BenchmarkPreemptAction_MediumCluster-4         	       9	 114829992 ns/op	 4578818 B/op	   41731 allocs/op
BenchmarkPreemptAction_MediumCluster-4         	       9	 114801087 ns/op	 4578671 B/op	   41730 allocs/op
BenchmarkConsolidationAction_SmallCluster-4    	       9	 114743819 ns/op	 6186845 B/op	   75950 allocs/op
BenchmarkConsolidationAction_SmallCluster-4    	       9	 115486092 ns/op	 6184536 B/op	   75934 allocs/op
BenchmarkConsolidationAction_SmallCluster-4    	       9	 115207966 ns/op	 6182239 B/op	   75912 allocs/op
BenchmarkConsolidationAction_SmallCluster-4    	       9	 115027863 ns/op	 6192651 B/op	   75924 allocs/op

…eads to vectors

Convert all read-path methods from Resource to ResourceVector operations:
- NodeInfo: IsTaskAllocatable, FittingError, GetSumOfIdleGPUs, IsCPUOnlyNode
- External plugins: proportion, topology, nodeplacement, nodeavailability, resourcetype
- Framework: session logging, statement references
- Error handling: pod_errors, job_errors

Add AcceptedResourceVector to PodInfo. Add QuantifyVector util to proportion
plugin. Rewrite topology calcNodeAccommodation from iterative pod probing to
division-based vector approach.

Resource fields still maintained via dual-write for backward compatibility
until removal in subsequent commits.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
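The commit message's "division-based vector approach" to calcNodeAccommodation can be sketched as a standalone function (hypothetical types and names; the real implementation works on the topology plugin's structures): instead of repeatedly probing "does one more pod fit?", divide idle capacity by the per-pod request dimension-wise and take the floor of the minimum ratio.

```go
package main

import (
	"fmt"
	"math"
)

type ResourceVector []float64

// nodeAccommodation returns how many identical pods, each requesting
// req, fit into idle: the minimum over dimensions of floor(idle[i]/req[i]).
func nodeAccommodation(idle, req ResourceVector) int {
	fit := math.MaxFloat64
	for i, r := range req {
		if r <= 0 {
			continue // an unrequested resource does not constrain the fit
		}
		fit = math.Min(fit, math.Floor(idle[i]/r))
	}
	if fit == math.MaxFloat64 {
		return 0 // nothing requested; defined here as zero by choice
	}
	return int(fit)
}

func main() {
	idle := ResourceVector{8, 32000, 4} // cores, MB, GPUs
	req := ResourceVector{2, 4000, 1}   // per-pod request
	fmt.Println(nodeAccommodation(idle, req)) // CPU-bound: floor(8/2) = 4
}
```

This replaces an O(pods) probing loop with a single O(resources) pass per node, which matches the commit's stated intent.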
@enoodle force-pushed the erez/migrate-node-info-pod-info-to-vectors branch from db49cb7 to 4a7b05d on March 5, 2026 at 12:24
@enoodle (Collaborator, Author) commented Mar 5, 2026

@coderabbitai review

@coderabbitai (Contributor) bot commented Mar 5, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@coderabbitai (Contributor) bot left a comment

Actionable comments posted: 7

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
pkg/scheduler/actions/common/allocate.go (1)

177-190: ⚠️ Potential issue | 🟡 Minor

Fix log wording to match vector payload.

These logs now print task.ResReqVector (full resource vector), but the message still says “requires … GPUs”. That is misleading during debugging.

Suggested patch
-	log.InfraLogger.V(6).Infof("Binding Task <%v/%v> to node <%v>, requires: %v GPUs",
+	log.InfraLogger.V(6).Infof("Binding Task <%v/%v> to node <%v>, requires resources vector: %v",
 		task.Namespace, task.Name, node.Name, task.ResReqVector)

-	log.InfraLogger.V(6).Infof("Pipelining Task <%v/%v> to node <%v> requires: %v GPUs",
+	log.InfraLogger.V(6).Infof("Pipelining Task <%v/%v> to node <%v> requires resources vector: %v",
 		task.Namespace, task.Name, node.Name, task.ResReqVector)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/scheduler/actions/common/allocate.go` around lines 177 - 190, Update the
log messages that currently print task.ResReqVector but say "requires: ... GPUs"
to accurately reflect the payload; change the wording in the binding log (the
log right before stmt.Allocate(...) inside the Allocate/Bind function) and in
pipelineTaskToNode(...) so they mention "requires resource vector" or "requires
resources" (or similar) instead of "requires: %v GPUs", keeping the same %v
placeholder for task.ResReqVector.
🧹 Nitpick comments (9)
pkg/scheduler/actions/common/solvers/accumulated_scenario_filters/idle_gpus/idle_gpus.go (1)

154-154: Use a single constant for the GPU vector key.

Both Line 154 and Line 177 hardcode "gpu". Centralizing this key reduces typo/drift risk while vector migration is still in progress.

Suggested refactor
 const (
 	idleGpuFilterName               = "AccumulatedIdleGpus"
 	nonAccumulatedScenarioBaseError = "accumulatedIdleGpus requires all the filters scenarios using the same instance to be based on the same scenario with accumulation of potential victims. "
 	requiredResourcesDiffError      = "The pending task %s didn't appear in the scenario given at the AccumulatedIdleGpus ctor"
 	recordedVictimsDiffError        = "The recorded victims should remain the same between the different scenario filtering. %d cache hits, pre update recorded tasks seen %d, post update recorded tasks seen %d"
 	potentialVictimsDiffError       = "The list of potential victims for the current scenario should contain the previous list of potential victims. Only %d out %d tasks are the contained in the current scenario"
+	gpuVectorKey                    = "gpu"
 )
@@
-		requiredResources = append(requiredResources, pod.ResReqVector.Get(pod.VectorMap.GetIndex("gpu")))
+		requiredResources = append(requiredResources, pod.ResReqVector.Get(pod.VectorMap.GetIndex(gpuVectorKey)))
@@
-	ig.nodesNameToIdleGpus[task.NodeName] += task.AcceptedResourceVector.Get(task.VectorMap.GetIndex("gpu"))
+	ig.nodesNameToIdleGpus[task.NodeName] += task.AcceptedResourceVector.Get(task.VectorMap.GetIndex(gpuVectorKey))

Also applies to: 177-177

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@pkg/scheduler/actions/common/solvers/accumulated_scenario_filters/idle_gpus/idle_gpus.go`
at line 154, The code hardcodes "gpu" in multiple places (e.g., in the
requiredResources append using
pod.ResReqVector.Get(pod.VectorMap.GetIndex("gpu")) and the other occurrence at
line 177); introduce a single file-level constant (e.g., const gpuVectorKey =
"gpu") and replace all hardcoded "gpu" usages with that constant (use
pod.VectorMap.GetIndex(gpuVectorKey) wherever needed) so the key is centralized
and avoids drift during vector migration.
pkg/scheduler/actions/common/solvers/accumulated_scenario_filters/idle_gpus/idle_gpus_test.go (1)

28-28: Prefer per-test vector map instances over a shared package variable.

Line 28 introduces mutable shared test state. A per-test map keeps fixtures isolated and avoids hidden coupling as more vector keys are introduced in future tests.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@pkg/scheduler/actions/common/solvers/accumulated_scenario_filters/idle_gpus/idle_gpus_test.go`
at line 28, Replace the package-level mutable testVectorMap with per-test
instances to avoid shared mutable state: remove the var testVectorMap =
resource_info.NewResourceVectorMap() and instantiate
resource_info.NewResourceVectorMap() inside each test (or a test setup helper)
that currently references testVectorMap (search for usages of testVectorMap in
idle_gpus_test.go) so each test gets its own fresh ResourceVectorMap fixture.
pkg/scheduler/plugins/proportion/proportion_test.go (1)

623-627: Reduce duplicated resource fixture setup for ResReq/ResReqVector.

The same requirement is parsed twice per pod (once for ResReq, once for ResReqVector). Create the requirement once and derive both fields from it to avoid fixture drift.

♻️ Example pattern
-ResReq:       common_info.BuildResourceRequirements("1", "1G"),
-ResReqVector: common_info.BuildResourceRequirements("1", "1G").ToVector(testVectorMap),
+req := common_info.BuildResourceRequirements("1", "1G")
+ResReq:       req,
+ResReqVector: req.ToVector(testVectorMap),

Also applies to: 634-638, 661-665, 672-676, 686-690, 713-717, 724-728, 738-742, 843-847, 875-879, 881-885, 887-891, 921-925, 927-931, 958-962, 964-968

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/scheduler/plugins/proportion/proportion_test.go` around lines 623 - 627,
The test fixtures duplicate parsing by calling
common_info.BuildResourceRequirements twice for ResReq and ResReqVector; fix by
creating a single requirement variable (e.g., req :=
common_info.BuildResourceRequirements("2", "2G")) and reuse it for both fields
so ResReq: req and ResReqVector: req.ToVector(testVectorMap); apply the same
refactor to all similar blocks that build resources (lines referenced in the
comment) to avoid fixture drift and keep testVectorMap usage consistent.
pkg/scheduler/plugins/proportion/utils/utils.go (1)

24-24: Add a GoDoc comment for exported QuantifyVector.

Please add a short GoDoc comment starting with "QuantifyVector ...".

As per coding guidelines "Add GoDoc-style comments for exported functions and types."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/scheduler/plugins/proportion/utils/utils.go` at line 24, QuantifyVector
is missing a GoDoc comment; add a GoDoc-style comment directly above the
exported function QuantifyVector that begins with "QuantifyVector ..." and
briefly describes what the function does, its parameters (vec and vectorMap) and
the returned rs.ResourceQuantities to satisfy the project's documentation
guidelines.
pkg/scheduler/plugins/proportion/proportion.go (1)

221-239: Refactor getResources to single-pass accumulation and nil-skip.

The current implementation allocates an intermediate slice and then re-iterates it. A single pass is simpler, and safer if any AcceptedResourceVector is nil.

Suggested patch
 func getResources(ignoreReallocatedTasks bool, pods ...*pod_info.PodInfo) resource_info.ResourceVector {
-	var vectors []resource_info.ResourceVector
+	var total resource_info.ResourceVector
 	for _, task := range pods {
 		if ignoreReallocatedTasks && pod_status.IsActiveAllocatedStatus(task.Status) {
 			continue
 		}
-		vectors = append(vectors, task.AcceptedResourceVector)
-	}
-
-	if len(vectors) == 0 {
-		return nil
-	}
-
-	total := vectors[0].Clone()
-	for _, vec := range vectors[1:] {
+		vec := task.AcceptedResourceVector
+		if vec == nil {
+			continue
+		}
+		if total == nil {
+			total = vec.Clone()
+			continue
+		}
 		total.Add(vec)
 	}
 
 	return total
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/scheduler/plugins/proportion/proportion.go` around lines 221 - 239, The
getResources function currently builds an intermediate slice then sums it;
instead, iterate once over pods, skip tasks where ignoreReallocatedTasks &&
pod_status.IsActiveAllocatedStatus(task.Status) and also skip when
task.AcceptedResourceVector is nil, and accumulate directly into a single
resource_info.ResourceVector named total by setting total =
task.AcceptedResourceVector.Clone() for the first non-nil vector and
total.Add(vec) for subsequent ones; if no non-nil vectors are seen return nil.
Use the existing function name getResources, types resource_info.ResourceVector
and pod_info.PodInfo, and the AcceptedResourceVector field/Clone/Add methods
referenced in the diff.
pkg/scheduler/cache/cluster_info/cluster_info.go (1)

285-285: Replace the hardcoded GPU key with the shared constant.

Using a string literal here is brittle when a canonical constant already exists in this file.

♻️ Proposed cleanup
-			if nodeInfo.AllocatableVector.Get(nodeInfo.VectorMap.GetIndex("gpu")) > 0 {
+			if nodeInfo.AllocatableVector.Get(nodeInfo.VectorMap.GetIndex(constants.GpuResource)) > 0 {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/scheduler/cache/cluster_info/cluster_info.go` at line 285, The code uses
a hardcoded string literal "gpu" in nodeInfo.VectorMap.GetIndex("gpu") when
checking nodeInfo.AllocatableVector.Get(...); replace that literal with the
package-level/shared GPU constant (use the existing canonical constant in this
file, e.g., GPUResourceKey) so the check reads
nodeInfo.VectorMap.GetIndex(GPUResourceKey) (keep using nodeInfo,
AllocatableVector.Get and VectorMap.GetIndex as-is).
pkg/scheduler/plugins/topology/job_filtering.go (1)

113-124: Extract duplicated task-vector aggregation into one helper.

The same accumulation logic exists in getTasksAllocationMetadata and sortTreeFromRoot; consolidating it will reduce drift risk.

Also applies to: 431-440

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/scheduler/plugins/topology/job_filtering.go` around lines 113 - 124, The
aggregation logic that builds a combined ResourceVector and task count is
duplicated in getTasksAllocationMetadata and in sortTreeFromRoot; extract it
into a single helper (e.g., aggregateTasksResources or
computeTasksResourceSummary) that accepts []*pod_info.PodInfo and returns
(resource_info.ResourceVector, int), move the loop that clones the first
task.ResReqVector and adds subsequent vectors into that helper, and replace the
bodies of getTasksAllocationMetadata and the code in sortTreeFromRoot (and the
duplicate at lines ~431-440) to call the new helper to obtain tasksResources and
tasksCount.
pkg/scheduler/plugins/proportion/reclaimable/strategies/strategies.go (1)

18-24: Add a GoDoc comment for exported FitsReclaimStrategy.

The exported function signature was updated, but it still has no GoDoc comment.

As per coding guidelines "Add GoDoc-style comments for exported functions and types".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/scheduler/plugins/proportion/reclaimable/strategies/strategies.go` around
lines 18 - 24, Add a GoDoc comment for the exported function FitsReclaimStrategy
that briefly describes what the function does and explains its parameters and
return value: mention that it determines whether a reclaimer
(reclaimerResources) can reclaim resources from a reclaimee given the resource
vector map (vectorMap), queue attributes for reclaimer and reclaimee
(reclaimerQueue, reclaimeeQueue) and the reclaimee's remaining share
(reclaimeeRemainingShare), and that it returns a bool indicating whether the
reclaim fits; place the comment immediately above the FitsReclaimStrategy
declaration and follow GoDoc conventions (starts with the function name and is a
complete sentence).
pkg/scheduler/plugins/proportion/reclaimable/reclaimable.go (1)

6-20: Reorder imports into standard / external / internal groups.

k8s.io/api/core/v1 is an external dependency and should be separated from internal imports.

♻️ Suggested import grouping
 import (
 	"fmt"
 	"maps"
 	"math"
 	"strings"
 
+	v1 "k8s.io/api/core/v1"
+
 	commonconstants "github.com/NVIDIA/KAI-scheduler/pkg/common/constants"
 	"github.com/NVIDIA/KAI-scheduler/pkg/scheduler/api/common_info"
 	"github.com/NVIDIA/KAI-scheduler/pkg/scheduler/api/resource_info"
 	"github.com/NVIDIA/KAI-scheduler/pkg/scheduler/log"
 	"github.com/NVIDIA/KAI-scheduler/pkg/scheduler/plugins/proportion/reclaimable/strategies"
 	rs "github.com/NVIDIA/KAI-scheduler/pkg/scheduler/plugins/proportion/resource_share"
 	"github.com/NVIDIA/KAI-scheduler/pkg/scheduler/plugins/proportion/utils"
-	v1 "k8s.io/api/core/v1"
 )

As per coding guidelines "Organize imports in three groups separated by blank lines: (1) Standard library, (2) External dependencies, (3) Internal packages".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/scheduler/plugins/proportion/reclaimable/reclaimable.go` around lines 6 -
20, Reorder the import block in reclaimable.go into three groups separated by
blank lines: (1) standard library imports (fmt, maps, math, strings), (2)
external dependencies (v1 "k8s.io/api/core/v1"), and (3) internal packages
(commonconstants, common_info, resource_info, log, strategies, rs, utils) while
preserving existing aliases (e.g., v1 and rs) and import order within each group
to satisfy the project's import grouping guideline.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@pkg/scheduler/api/common_info/pod_errors.go`:
- Around line 80-83: The code reads vectors using indices from
vectorMap.GetIndex without checking for -1; update the logic around the uses of
vectorMap.GetIndex for migProfile (and the analogous scalar/resource access
later) to first capture the index (e.g., migIdx :=
vectorMap.GetIndex(string(migProfile))) and guard with if migIdx >= 0 before
calling availableVector.Get(migIdx) or capacityVector.Get(migIdx); if the index
is < 0, return or surface an explicit error/log indicating the resource/MIG
profile is unregistered (so you don't silently treat it as zero), mirroring the
existing pattern used elsewhere in resource_vector.go.

In `@pkg/scheduler/api/node_info/node_info.go`:
- Around line 394-403: The helper lessEqualVectorsExcludingGPU mutates its input
ResourceVector values (a and b) which can race; change it to a non-mutating
comparison: get gpuIdx via ni.VectorMap.GetIndex(commonconstants.GpuResource)
and then loop over all valid indices (or use a length accessor) comparing
a.Get(i) <= b.Get(i) for each i except gpuIdx, returning false on the first
violation and true otherwise. Ensure you reference and use only
ResourceVector.Get and the VectorMap/GetIndex (no Set calls) so neither
task.ResReqVector nor node vectors are modified during the check.

In `@pkg/scheduler/plugins/proportion/capacity_policy/capacity_policy.go`:
- Around line 79-83: getRequiredQuota iterates tasksToAllocate and blindly calls
utils.QuantifyVector(pod.ResReqVector, pod.VectorMap) which will nil-deref for
pods that haven't initialized vector fields; add a defensive check in
getRequiredQuota: if pod.ResReqVector==nil || pod.VectorMap==nil then compute
quantities from the pod's scalar resource fields (e.g., CPU/milliCPU, Memory,
GPU scalar fields the pod struct exposes) and add those to quota.GPU,
quota.MilliCPU and quota.Memory, otherwise call utils.QuantifyVector as before;
update variable names (quantities, quota.GPU/MilliCPU/Memory) accordingly so
behavior is identical for vectorized pods.

In `@pkg/scheduler/plugins/proportion/proportion_test.go`:
- Around line 811-817: The node's VectorMap and AllocatableVector are being
built from a newly created vectorMap, which can diverge from the test fixtures'
testVectorMap and break vector index alignment; change the setup so
testData.node.VectorMap is assigned the existing testVectorMap (use
testVectorMap instead of a locally created vectorMap) and compute
testData.node.AllocatableVector via
testData.node.Allocatable.ToVector(testVectorMap) so getNodeResources and pod
vectors share the same ResourceVectorMap instance.

In `@pkg/scheduler/plugins/proportion/reclaimable/reclaimable.go`:
- Around line 269-289: getInvolvedResourcesNames currently only checks the
GpuResource slot; add a check for the MIG-backed GPU slot so MIG-only vectors
also mark GPU as involved: retrieve the MIG index from vectorMap (similar to
cpuIdx/memIdx/gpuIdx, e.g. migIdx :=
vectorMap.GetIndex(commonconstants.MigResource) or the project's MIG constant),
ensure the index is valid, and if vec.Get(migIdx) > 0 set
involvedResources[rs.GpuResource] = struct{}{} (keep the existing checks for
cpuIdx/memIdx/gpuIdx intact in the getInvolvedResourcesNames function).

In `@pkg/scheduler/plugins/topology/job_filtering_test.go`:
- Around line 31-32: Tests currently mix a global testVectorMap (created via
resource_info.NewResourceVectorMap()) with per-tree VectorMap instances causing
inconsistent vector/indices; replace the global testVectorMap by creating a
single VectorMap inside each test and pass that same VectorMap instance to every
topology/tree creation in that test (instead of creating new maps inside
functions), update usages referencing testVectorMap so all calls use the local
VectorMap, and remove any other per-tree NewResourceVectorMap() calls so each
test runs end-to-end with one consistent VectorMap.

In `@pkg/scheduler/plugins/topology/topology_plugin.go`:
- Around line 59-64: If nodes can be empty, ensure sharedVectorMap is
initialized to a non-nil default instead of staying nil: after the loop that
sets sharedVectorMap from nodeInfo.VectorMap, check if sharedVectorMap == nil
and, if so, create and assign a new resource_info.ResourceVectorMap (or call the
existing constructor/helper) before it is used or assigned into topology trees
(references: sharedVectorMap, nodes, resource_info.ResourceVectorMap); this
prevents nil derefs when later converting/indexing vectors.

---

Outside diff comments:
In `@pkg/scheduler/actions/common/allocate.go`:
- Around line 177-190: Update the log messages that currently print
task.ResReqVector but say "requires: ... GPUs" to accurately reflect the
payload; change the wording in the binding log (the log right before
stmt.Allocate(...) inside the Allocate/Bind function) and in
pipelineTaskToNode(...) so they mention "requires resource vector" or "requires
resources" (or similar) instead of "requires: %v GPUs", keeping the same %v
placeholder for task.ResReqVector.

---

Nitpick comments:
In
`@pkg/scheduler/actions/common/solvers/accumulated_scenario_filters/idle_gpus/idle_gpus_test.go`:
- Line 28: Replace the package-level mutable testVectorMap with per-test
instances to avoid shared mutable state: remove the var testVectorMap =
resource_info.NewResourceVectorMap() and instantiate
resource_info.NewResourceVectorMap() inside each test (or a test setup helper)
that currently references testVectorMap (search for usages of testVectorMap in
idle_gpus_test.go) so each test gets its own fresh ResourceVectorMap fixture.

In
`@pkg/scheduler/actions/common/solvers/accumulated_scenario_filters/idle_gpus/idle_gpus.go`:
- Line 154: The code hardcodes "gpu" in multiple places (e.g., in the
requiredResources append using
pod.ResReqVector.Get(pod.VectorMap.GetIndex("gpu")) and the other occurrence at
line 177); introduce a single file-level constant (e.g., const gpuVectorKey =
"gpu") and replace all hardcoded "gpu" usages with that constant (use
pod.VectorMap.GetIndex(gpuVectorKey) wherever needed) so the key is centralized
and avoids drift during vector migration.

In `@pkg/scheduler/cache/cluster_info/cluster_info.go`:
- Line 285: The code uses a hardcoded string literal "gpu" in
nodeInfo.VectorMap.GetIndex("gpu") when checking
nodeInfo.AllocatableVector.Get(...); replace that literal with the
package-level/shared GPU constant (use the existing canonical constant in this
file, e.g., GPUResourceKey) so the check reads
nodeInfo.VectorMap.GetIndex(GPUResourceKey) (keep using nodeInfo,
AllocatableVector.Get and VectorMap.GetIndex as-is).

In `@pkg/scheduler/plugins/proportion/proportion_test.go`:
- Around line 623-627: The test fixtures duplicate parsing by calling
common_info.BuildResourceRequirements twice for ResReq and ResReqVector; fix by
creating a single requirement variable (e.g., req :=
common_info.BuildResourceRequirements("2", "2G")) and reuse it for both fields
so ResReq: req and ResReqVector: req.ToVector(testVectorMap); apply the same
refactor to all similar blocks that build resources (lines referenced in the
comment) to avoid fixture drift and keep testVectorMap usage consistent.

In `@pkg/scheduler/plugins/proportion/proportion.go`:
- Around line 221-239: The getResources function currently builds an
intermediate slice then sums it; instead, iterate once over pods, skip tasks
where ignoreReallocatedTasks && pod_status.IsActiveAllocatedStatus(task.Status)
and also skip when task.AcceptedResourceVector is nil, and accumulate directly
into a single resource_info.ResourceVector named total by setting total =
task.AcceptedResourceVector.Clone() for the first non-nil vector and
total.Add(vec) for subsequent ones; if no non-nil vectors are seen return nil.
Use the existing function name getResources, types resource_info.ResourceVector
and pod_info.PodInfo, and the AcceptedResourceVector field/Clone/Add methods
referenced in the diff.

In `@pkg/scheduler/plugins/proportion/reclaimable/reclaimable.go`:
- Around line 6-20: Reorder the import block in reclaimable.go into three groups
separated by blank lines: (1) standard library imports (fmt, maps, math,
strings), (2) external dependencies (v1 "k8s.io/api/core/v1"), and (3) internal
packages (commonconstants, common_info, resource_info, log, strategies, rs,
utils) while preserving existing aliases (e.g., v1 and rs) and import order
within each group to satisfy the project's import grouping guideline.

In `@pkg/scheduler/plugins/proportion/reclaimable/strategies/strategies.go`:
- Around line 18-24: Add a GoDoc comment for the exported function
FitsReclaimStrategy that briefly describes what the function does and explains
its parameters and return value: mention that it determines whether a reclaimer
(reclaimerResources) can reclaim resources from a reclaimee given the resource
vector map (vectorMap), queue attributes for reclaimer and reclaimee
(reclaimerQueue, reclaimeeQueue) and the reclaimee's remaining share
(reclaimeeRemainingShare), and that it returns a bool indicating whether the
reclaim fits; place the comment immediately above the FitsReclaimStrategy
declaration and follow GoDoc conventions (starts with the function name and is a
complete sentence).

In `@pkg/scheduler/plugins/proportion/utils/utils.go`:
- Line 24: QuantifyVector is missing a GoDoc comment; add a GoDoc-style comment
directly above the exported function QuantifyVector that begins with
"QuantifyVector ..." and briefly describes what the function does, its
parameters (vec and vectorMap) and the returned rs.ResourceQuantities to satisfy
the project's documentation guidelines.

In `@pkg/scheduler/plugins/topology/job_filtering.go`:
- Around line 113-124: The aggregation logic that builds a combined
ResourceVector and task count is duplicated in getTasksAllocationMetadata and in
sortTreeFromRoot; extract it into a single helper (e.g., aggregateTasksResources
or computeTasksResourceSummary) that accepts []*pod_info.PodInfo and returns
(resource_info.ResourceVector, int), move the loop that clones the first
task.ResReqVector and adds subsequent vectors into that helper, and replace the
bodies of getTasksAllocationMetadata and the code in sortTreeFromRoot (and the
duplicate at lines ~431-440) to call the new helper to obtain tasksResources and
tasksCount.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 3560b2aa-a32c-4a45-a10b-4d963473d8f0

📥 Commits

Reviewing files that changed from the base of the PR and between 49dd391 and 4a7b05d.

📒 Files selected for processing (38)
  • pkg/scheduler/actions/common/allocate.go
  • pkg/scheduler/actions/common/feasible_nodes_test.go
  • pkg/scheduler/actions/common/solvers/accumulated_scenario_filters/idle_gpus/idle_gpus.go
  • pkg/scheduler/actions/common/solvers/accumulated_scenario_filters/idle_gpus/idle_gpus_test.go
  • pkg/scheduler/actions/common/solvers/pod_scenario_builder_test.go
  • pkg/scheduler/actions/utils/job_order_by_queue_test.go
  • pkg/scheduler/api/common_info/job_errors.go
  • pkg/scheduler/api/common_info/job_errors_test.go
  • pkg/scheduler/api/common_info/pod_errors.go
  • pkg/scheduler/api/common_info/pod_errors_test.go
  • pkg/scheduler/api/node_info/gpu_sharing_node_info.go
  • pkg/scheduler/api/node_info/node_info.go
  • pkg/scheduler/api/node_info/node_info_test.go
  • pkg/scheduler/api/pod_info/pod_info.go
  • pkg/scheduler/cache/cluster_info/cluster_info.go
  • pkg/scheduler/framework/session.go
  • pkg/scheduler/framework/statement.go
  • pkg/scheduler/plugins/nodeavailability/nodeavailability.go
  • pkg/scheduler/plugins/nodeplacement/nodepack_test.go
  • pkg/scheduler/plugins/nodeplacement/nodespread_test.go
  • pkg/scheduler/plugins/nodeplacement/pack.go
  • pkg/scheduler/plugins/nodeplacement/spread.go
  • pkg/scheduler/plugins/proportion/capacity_policy/capacity_policy.go
  • pkg/scheduler/plugins/proportion/capacity_policy/capacity_policy_test.go
  • pkg/scheduler/plugins/proportion/proportion.go
  • pkg/scheduler/plugins/proportion/proportion_test.go
  • pkg/scheduler/plugins/proportion/reclaimable/reclaimable.go
  • pkg/scheduler/plugins/proportion/reclaimable/reclaimable_test.go
  • pkg/scheduler/plugins/proportion/reclaimable/reclaimer_info.go
  • pkg/scheduler/plugins/proportion/reclaimable/strategies/strategies.go
  • pkg/scheduler/plugins/proportion/reclaimable/strategies/strategies_test.go
  • pkg/scheduler/plugins/proportion/utils/utils.go
  • pkg/scheduler/plugins/resourcetype/resourcetype.go
  • pkg/scheduler/plugins/topology/job_filtering.go
  • pkg/scheduler/plugins/topology/job_filtering_test.go
  • pkg/scheduler/plugins/topology/node_scoring_test.go
  • pkg/scheduler/plugins/topology/topology_plugin.go
  • pkg/scheduler/plugins/topology/topology_structs.go

Comment on lines +80 to 83
migIdx := vectorMap.GetIndex(string(migProfile))
availableMigProfilesQuant := int64(availableVector.Get(migIdx))
capacityMigProfilesQuant := int64(capacityVector.Get(migIdx))
if availableMigProfilesQuant < quant {

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "=== Finding ResourceVectorMap/Get implementation ==="
fd resource_vector.go -t f 2>/dev/null || echo "File not found, searching in pkg/scheduler"
find . -name "resource_vector.go" -type f 2>/dev/null | head -5

echo
echo "=== Finding pod_errors.go ==="
find . -name "pod_errors.go" -type f 2>/dev/null | head -5

Repository: NVIDIA/KAI-Scheduler

Length of output: 297


🏁 Script executed:

#!/bin/bash
# Let's search for the resource_vector.go file and examine GetIndex and Get implementations
rg -l "ResourceVectorMap|ResourceVector" --type go | head -10

Repository: NVIDIA/KAI-Scheduler

Length of output: 590


🏁 Script executed:

#!/bin/bash
# Search for GetIndex and Get method implementations
rg -n "func.*GetIndex\(" --type go -A 5
rg -n "func.*\.Get\(" --type go -A 5 | head -50

Repository: NVIDIA/KAI-Scheduler

Length of output: 1284


🏁 Script executed:

#!/bin/bash
# Read the actual pod_errors.go file to see lines 80-83 and 137-145
cat pkg/scheduler/api/common_info/pod_errors.go | head -150

Repository: NVIDIA/KAI-Scheduler

Length of output: 5664


🏁 Script executed:

#!/bin/bash
# Let's find the ResourceVector.Get implementation
rg -n "func.*ResourceVector.*Get" --type go -A 10 | grep -A 10 "func (.*ResourceVector) Get"

Repository: NVIDIA/KAI-Scheduler

Length of output: 899


🏁 Script executed:

#!/bin/bash
# Check what Get returns for invalid index
rg -n "Get\(" pkg/scheduler/api/resource_info/resource_vector.go -B 2 -A 10 | head -60

Repository: NVIDIA/KAI-Scheduler

Length of output: 2240


🏁 Script executed:

#!/bin/bash
# Examine the ResourceVector type and Get method more carefully
sed -n '1,300p' pkg/scheduler/api/resource_info/resource_vector.go | tail -150

Repository: NVIDIA/KAI-Scheduler

Length of output: 4369


🏁 Script executed:

#!/bin/bash
# Check how GetIndex followed by Get is used elsewhere in the codebase
rg -n "GetIndex.*\(" --type go -A 3 | grep -A 3 "\.Get(" | head -100

Repository: NVIDIA/KAI-Scheduler

Length of output: 7695


🏁 Script executed:

#!/bin/bash
# Check for explicit index guard patterns before Get calls
rg -n "if.*GetIndex.*>=" --type go -B 2 -A 2

Repository: NVIDIA/KAI-Scheduler

Length of output: 1298


Add guards before reading missing resource indices from vectors.

GetIndex returns -1 for unregistered resources, and while Get(-1) safely returns 0 (no panic risk), this masks resource accounting errors. When a MIG profile or scalar resource is requested but absent from the vectorMap, the code silently treats it as 0 available, which can produce incorrect error messages.

This pattern is inconsistent with other vector access in resource_vector.go (lines 211–212, 233–234, 238–240), where explicit if idx >= 0 guards precede vector operations. For consistency and to catch misconfigurations early, add similar guards at lines 80 and 137.

Suggested hardening pattern
-			migIdx := vectorMap.GetIndex(string(migProfile))
-			availableMigProfilesQuant := int64(availableVector.Get(migIdx))
-			capacityMigProfilesQuant := int64(capacityVector.Get(migIdx))
+			migIdx := vectorMap.GetIndex(string(migProfile))
+			if migIdx >= 0 {
+				availableMigProfilesQuant := int64(availableVector.Get(migIdx))
+				capacityMigProfilesQuant := int64(capacityVector.Get(migIdx))
 			if availableMigProfilesQuant < quant {
-		scalarIdx := vectorMap.GetIndex(string(requestedResourceName))
-		availableResourceQuant := int64(availableVector.Get(scalarIdx))
-		capacityResourceQuant := int64(capacityVector.Get(scalarIdx))
+		scalarIdx := vectorMap.GetIndex(string(requestedResourceName))
+		if scalarIdx >= 0 {
+			availableResourceQuant := int64(availableVector.Get(scalarIdx))
+			capacityResourceQuant := int64(capacityVector.Get(scalarIdx))
 		if availableResourceQuant < requestedResourceQuant {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/scheduler/api/common_info/pod_errors.go` around lines 80 - 83, The code
reads vectors using indices from vectorMap.GetIndex without checking for -1;
update the logic around the uses of vectorMap.GetIndex for migProfile (and the
analogous scalar/resource access later) to first capture the index (e.g., migIdx
:= vectorMap.GetIndex(string(migProfile))) and guard with if migIdx >= 0 before
calling availableVector.Get(migIdx) or capacityVector.Get(migIdx); if the index
is < 0, return or surface an explicit error/log indicating the resource/MIG
profile is unregistered (so you don't silently treat it as zero), mirroring the
existing pattern used elsewhere in resource_vector.go.

Comment on lines +394 to +403
func (ni *NodeInfo) lessEqualVectorsExcludingGPU(a, b resource_info.ResourceVector) bool {
gpuIdx := ni.VectorMap.GetIndex(commonconstants.GpuResource)
savedA := a.Get(gpuIdx)
savedB := b.Get(gpuIdx)
a.Set(gpuIdx, 0)
b.Set(gpuIdx, 0)
result := a.LessEqual(b)
a.Set(gpuIdx, savedA)
b.Set(gpuIdx, savedB)
return result

⚠️ Potential issue | 🔴 Critical

Avoid mutating shared vectors in resource comparisons.

This helper temporarily writes into both inputs (a and b). That mutates task.ResReqVector and node vectors during feasibility checks and can race under concurrent node evaluation.

Proposed non-mutating fix
 func (ni *NodeInfo) lessEqualVectorsExcludingGPU(a, b resource_info.ResourceVector) bool {
 	gpuIdx := ni.VectorMap.GetIndex(commonconstants.GpuResource)
-	savedA := a.Get(gpuIdx)
-	savedB := b.Get(gpuIdx)
-	a.Set(gpuIdx, 0)
-	b.Set(gpuIdx, 0)
-	result := a.LessEqual(b)
-	a.Set(gpuIdx, savedA)
-	b.Set(gpuIdx, savedB)
-	return result
+	if len(a) != len(b) {
+		return false
+	}
+	for i := 0; i < len(a); i++ {
+		if i == gpuIdx {
+			continue
+		}
+		if a.Get(i) > b.Get(i) {
+			return false
+		}
+	}
+	return true
 }
Based on learnings: In docs/developer/designs/vectorizing-resources/README.md, “simultaneously” refers to goroutine-based concurrent execution where different goroutines process different nodes.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/scheduler/api/node_info/node_info.go` around lines 394 - 403, The helper
lessEqualVectorsExcludingGPU mutates its input ResourceVector values (a and b)
which can race; change it to a non-mutating comparison: get gpuIdx via
ni.VectorMap.GetIndex(commonconstants.GpuResource) and then loop over all valid
indices (or use a length accessor) comparing a.Get(i) <= b.Get(i) for each i
except gpuIdx, returning false on the first violation and true otherwise. Ensure
you reference and use only ResourceVector.Get and the VectorMap/GetIndex (no Set
calls) so neither task.ResReqVector nor node vectors are modified during the
check.

Comment on lines 79 to +83
for _, pod := range tasksToAllocate {
quota.GPU += pod.ResReq.GetGpusQuota()
quota.MilliCPU += pod.ResReq.Cpu()
quota.Memory += pod.ResReq.Memory()
quantities := utils.QuantifyVector(pod.ResReqVector, pod.VectorMap)
quota.GPU += quantities[rs.GpuResource]
quota.MilliCPU += quantities[rs.CpuResource]
quota.Memory += quantities[rs.MemoryResource]

⚠️ Potential issue | 🟠 Major

Add fallback for pods without vector fields initialized.

getRequiredQuota now assumes ResReqVector and VectorMap are always present. Add a scalar fallback to avoid nil dereference / bad quota in transitional paths.

🛠️ Suggested defensive fallback
 func getRequiredQuota(tasksToAllocate []*pod_info.PodInfo) *podgroup_info.JobRequirement {
 	quota := podgroup_info.JobRequirement{}
 	for _, pod := range tasksToAllocate {
-		quantities := utils.QuantifyVector(pod.ResReqVector, pod.VectorMap)
+		var quantities rs.ResourceQuantities
+		if pod.VectorMap == nil || pod.ResReqVector == nil {
+			if pod.ResReq == nil {
+				continue
+			}
+			quantities = utils.QuantifyResourceRequirements(pod.ResReq)
+		} else {
+			quantities = utils.QuantifyVector(pod.ResReqVector, pod.VectorMap)
+		}
 		quota.GPU += quantities[rs.GpuResource]
 		quota.MilliCPU += quantities[rs.CpuResource]
 		quota.Memory += quantities[rs.MemoryResource]
 	}
 	return &quota
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/scheduler/plugins/proportion/capacity_policy/capacity_policy.go` around
lines 79 - 83, getRequiredQuota iterates tasksToAllocate and blindly calls
utils.QuantifyVector(pod.ResReqVector, pod.VectorMap) which will nil-deref for
pods that haven't initialized vector fields; add a defensive check in
getRequiredQuota: if pod.ResReqVector==nil || pod.VectorMap==nil then compute
quantities from the pod's scalar resource fields (e.g., CPU/milliCPU, Memory,
GPU scalar fields the pod struct exposes) and add those to quota.GPU,
quota.MilliCPU and quota.Memory, otherwise call utils.QuantifyVector as before;
update variable names (quantities, quota.GPU/MilliCPU/Memory) accordingly so
behavior is identical for vectorized pods.

Comment on lines +811 to 817
vectorMap := resource_info.NewResourceVectorMap()
for rName := range testData.node.Allocatable.ScalarResources() {
vectorMap.AddResource(string(rName))
}
testData.node.VectorMap = vectorMap
testData.node.AllocatableVector = testData.node.Allocatable.ToVector(vectorMap)
if got := getNodeResources(session, testData.node); !reflect.DeepEqual(got, testData.want) {

⚠️ Potential issue | 🟡 Minor

Ensure node and pod vectors use the same ResourceVectorMap instance.

On lines 811-817, testData.node.VectorMap is a locally built map, while several pod fixtures in the same tests use testVectorMap. If either map registers additional resources, the two can silently disagree on indices, making vector-math assertions brittle.

💡 Suggested fix
 vectorMap := resource_info.NewResourceVectorMap()
 for rName := range testData.node.Allocatable.ScalarResources() {
 	vectorMap.AddResource(string(rName))
 }
 testData.node.VectorMap = vectorMap
 testData.node.AllocatableVector = testData.node.Allocatable.ToVector(vectorMap)
+for _, p := range testData.node.PodInfos {
+	if p != nil && p.ResReq != nil {
+		p.VectorMap = vectorMap
+		p.ResReqVector = p.ResReq.ToVector(vectorMap)
+	}
+}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/scheduler/plugins/proportion/proportion_test.go` around lines 811 - 817,
The node's VectorMap and AllocatableVector are being built from a newly created
vectorMap, which can diverge from the test fixtures' testVectorMap and break
vector index alignment; change the setup so testData.node.VectorMap is assigned
the existing testVectorMap (use testVectorMap instead of a locally created
vectorMap) and compute testData.node.AllocatableVector via
testData.node.Allocatable.ToVector(testVectorMap) so getNodeResources and pod
vectors share the same ResourceVectorMap instance.

Comment on lines +269 to 289
 func getInvolvedResourcesNames(resources []resource_info.ResourceVector, vectorMap *resource_info.ResourceVectorMap) map[rs.ResourceName]any {
 	involvedResources := map[rs.ResourceName]any{}
-	for _, resource := range resources {
-		if resource == nil {
+	cpuIdx := vectorMap.GetIndex(string(v1.ResourceCPU))
+	memIdx := vectorMap.GetIndex(string(v1.ResourceMemory))
+	gpuIdx := vectorMap.GetIndex(commonconstants.GpuResource)
+
+	for _, vec := range resources {
+		if vec == nil {
 			continue
 		}
 
-		if resource.Cpu() > 0 {
+		if vec.Get(cpuIdx) > 0 {
 			involvedResources[rs.CpuResource] = struct{}{}
 		}
 
-		if resource.Memory() > 0 {
+		if vec.Get(memIdx) > 0 {
 			involvedResources[rs.MemoryResource] = struct{}{}
 		}
 
-		if resource.GPUs() > 0 {
+		if vec.Get(gpuIdx) > 0 {
 			involvedResources[rs.GpuResource] = struct{}{}

⚠️ Potential issue | 🟠 Major

Include MIG-backed GPU demand when collecting involved resources.

getInvolvedResourcesNames only marks GPU involvement from the GpuResource slot. MIG-only vectors will not mark GPU as involved, so GPU saturation checks can be skipped even though utils.QuantifyVector counts MIG toward GPU quota.

🐛 Suggested fix
 func getInvolvedResourcesNames(resources []resource_info.ResourceVector, vectorMap *resource_info.ResourceVectorMap) map[rs.ResourceName]any {
 	involvedResources := map[rs.ResourceName]any{}
-	cpuIdx := vectorMap.GetIndex(string(v1.ResourceCPU))
-	memIdx := vectorMap.GetIndex(string(v1.ResourceMemory))
-	gpuIdx := vectorMap.GetIndex(commonconstants.GpuResource)
 
 	for _, vec := range resources {
 		if vec == nil {
 			continue
 		}
 
-		if vec.Get(cpuIdx) > 0 {
+		quantified := utils.QuantifyVector(vec, vectorMap)
+		if quantified[rs.CpuResource] > 0 {
 			involvedResources[rs.CpuResource] = struct{}{}
 		}
 
-		if vec.Get(memIdx) > 0 {
+		if quantified[rs.MemoryResource] > 0 {
 			involvedResources[rs.MemoryResource] = struct{}{}
 		}
 
-		if vec.Get(gpuIdx) > 0 {
+		if quantified[rs.GpuResource] > 0 {
 			involvedResources[rs.GpuResource] = struct{}{}
 		}
 	}
 
 	return involvedResources
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/scheduler/plugins/proportion/reclaimable/reclaimable.go` around lines 269
- 289, getInvolvedResourcesNames currently only checks the GpuResource slot; add
a check for the MIG-backed GPU slot so MIG-only vectors also mark GPU as
involved: retrieve the MIG index from vectorMap (similar to
cpuIdx/memIdx/gpuIdx, e.g. migIdx :=
vectorMap.GetIndex(commonconstants.MigResource) or the project's MIG constant),
ensure the index is valid, and if vec.Get(migIdx) > 0 set
involvedResources[rs.GpuResource] = struct{}{} (keep the existing checks for
cpuIdx/memIdx/gpuIdx intact in the getInvolvedResourcesNames function).

Comment on lines +31 to +32
var testVectorMap = resource_info.NewResourceVectorMap()


⚠️ Potential issue | 🟠 Major

Use a single VectorMap per test topology end-to-end.

This file mixes a global testVectorMap with per-tree VectorMap instances. That can make vectors and indices inconsistent, creating order-dependent/false-positive tests.

✅ Suggested direction
-var testVectorMap = resource_info.NewResourceVectorMap()
+func newTestVectorMap() *resource_info.ResourceVectorMap {
+	return resource_info.NewResourceVectorMap()
+}
-			setupTopologyTree: func() *Info {
+			setupTopologyTree: func() *Info {
+				vm := newTestVectorMap()
 				tree := &Info{
-					Name:      "test-topology",
-					VectorMap: resource_info.NewResourceVectorMap(),
+					Name:      "test-topology",
+					VectorMap: vm,
 ...
-								IdleOrReleasingVector: resource_info.NewResource(0, 0, 0).ToVector(testVectorMap),
+								IdleOrReleasingVector: resource_info.NewResource(0, 0, 0).ToVector(vm),
-				tt.job.RootSubGroupSet.GetAllPodSets(), tasksResources.ToVector(testVectorMap), tasksCount,
+				tt.job.RootSubGroupSet.GetAllPodSets(), tasksResources.ToVector(tt.topologyTree.VectorMap), tasksCount,

Also applies to: 92-93, 107-108, 2126-2126

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/scheduler/plugins/topology/job_filtering_test.go` around lines 31 - 32,
Tests currently mix a global testVectorMap (created via
resource_info.NewResourceVectorMap()) with per-tree VectorMap instances causing
inconsistent vector/indices; replace the global testVectorMap by creating a
single VectorMap inside each test and pass that same VectorMap instance to every
topology/tree creation in that test (instead of creating new maps inside
functions), update usages referencing testVectorMap so all calls use the local
VectorMap, and remove any other per-tree NewResourceVectorMap() calls so each
test runs end-to-end with one consistent VectorMap.

Comment on lines +59 to +64
// Get VectorMap from any node (all share the same map)
var sharedVectorMap *resource_info.ResourceVectorMap
for _, nodeInfo := range nodes {
sharedVectorMap = nodeInfo.VectorMap
break
}

⚠️ Potential issue | 🔴 Critical

Ensure VectorMap is never nil when no nodes exist.

If nodes is empty, sharedVectorMap stays nil and gets assigned into every topology tree. Any later vector conversion/indexing against that map can fail at runtime.

🛠️ Proposed fix
 func (t *topologyPlugin) initializeTopologyTree(topologies []*kaiv1alpha1.Topology, nodes map[string]*node_info.NodeInfo) {
-	// Get VectorMap from any node (all share the same map)
-	var sharedVectorMap *resource_info.ResourceVectorMap
+	sharedVectorMap := resource_info.NewResourceVectorMap()
 	for _, nodeInfo := range nodes {
-		sharedVectorMap = nodeInfo.VectorMap
-		break
+		if nodeInfo != nil && nodeInfo.VectorMap != nil {
+			sharedVectorMap = nodeInfo.VectorMap
+			break
+		}
 	}

Also applies to: 75-75

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/scheduler/plugins/topology/topology_plugin.go` around lines 59 - 64, If
nodes can be empty, ensure sharedVectorMap is initialized to a non-nil default
instead of staying nil: after the loop that sets sharedVectorMap from
nodeInfo.VectorMap, check if sharedVectorMap == nil and, if so, create and
assign a new resource_info.ResourceVectorMap (or call the existing
constructor/helper) before it is used or assigned into topology trees
(references: sharedVectorMap, nodes, resource_info.ResourceVectorMap); this
prevents nil derefs when later converting/indexing vectors.
