
[Bugfix] [DeepSeek-V3.2] fix sparse_attn_indexer weights padding#35277

Open
kebe7jun wants to merge 3 commits into vllm-project:main from kebe7jun:fix/sparse-attn-padding

Conversation

@kebe7jun
Contributor

@kebe7jun kebe7jun commented Feb 25, 2026

Purpose

PR #29287 did not include the changes from #32175, so the code still assumes that batch_size_next_n == batch_size * next_n. That assumption no longer holds once the decode batch is padded, which leaves the sparse_attn_indexer weights with a different token count than the padded q_fp8 tensor.
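As a minimal illustration of the invariant being restored (a hypothetical helper, not the actual vLLM code — `pad_decode_inputs` and its tensor shapes are assumptions for the sketch): during decode, `weights` must be padded to the same token count as `q_fp8`, rather than relying on the token count equalling `batch_size * next_n`:

```python
import torch


def pad_decode_inputs(
    q_fp8: torch.Tensor, weights: torch.Tensor, padded_num_tokens: int
) -> tuple[torch.Tensor, torch.Tensor]:
    """Hypothetical sketch: pad both tensors to the same token count.

    When the decode batch is padded (e.g. for CUDA graph capture),
    `weights` must grow alongside `q_fp8`; padding only one of the
    two produces a shape mismatch in the sparse attention indexer.
    """

    def pad_to(x: torch.Tensor, n: int) -> torch.Tensor:
        if x.shape[0] >= n:  # no padding required: return unchanged
            return x
        out = x.new_zeros((n, *x.shape[1:]))
        out[: x.shape[0]] = x  # keep the real tokens at the front
        return out

    return pad_to(q_fp8, padded_num_tokens), pad_to(weights, padded_num_tokens)
```

Both the padded and unpadded branches return tensors with matching leading dimensions, which mirrors the consistency this fix enforces.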

cc @ganyi1996ppo

Test Plan

# Prefill
vllm serve /gpfs/rd/models/DeepSeek-V3.2 -tp=2 -dp 4 \
    --trust-remote-code --enable-expert-parallel \
    --all2all-backend=deepep_high_throughput \
    --gpu_memory_utilization=0.9 --max-model-len 102400 \
    --tokenizer-mode=deepseek_v32 --enable-eplb \
    --eplb-config '{"window_size":"32","step_interval":"32","num_redundant_experts":"8", "async": "True"}' \
    --kv-transfer-config '{"kv_connector":"NixlConnector","kv_role":"kv_both"}' \
    --num-gpu-blocks-override 2000 \
    --speculative-config.method=mtp --speculative-config.num_speculative_tokens=1

# Decode
vllm serve /gpfs/rd/models/DeepSeek-V3.2 -tp=2 -dp 4 \
    --trust-remote-code --enable-expert-parallel \
    --all2all-backend=deepep_low_latency \
    --gpu_memory_utilization=0.9 --max-model-len 102400 \
    --tokenizer-mode=deepseek_v32 --enable-eplb \
    --kv-transfer-config '{"kv_connector":"NixlConnector","kv_role":"kv_both"}' \
    --num-gpu-blocks-override 2000 \
    --speculative-config.method=mtp --speculative-config.num_speculative_tokens=1

# Router
python /gpfs/rd/kebe/vllm/tests/v1/kv_connector/nixl_integration/toy_proxy_server.py \
    --prefiller-hosts <PREFILLER_IP> --decoder-host <DECODER_IP> \
    --prefiller-ports 8000 --decoder-ports 8000 --port 8001

and run the benchmark:

vllm bench serve --port 8001 --random-input-len 4096 --random-output-len 200 \
    --num-prompts 500 --max-concurrency 128 --seed $RANDOM \
    --model /gpfs/rd/models/DeepSeek-V3.2

Test Result

The `vllm bench serve` run completed successfully.


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

@mergify mergify bot added deepseek Related to DeepSeek models bug Something isn't working labels Feb 25, 2026
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request correctly addresses a bug in the sparse_attn_indexer by ensuring the weights tensor is padded consistently with the q_fp8 tensor during the decode phase. The changes in vllm/model_executor/layers/sparse_attn_indexer.py properly handle both cases where padding is required and not required, preventing potential shape mismatches and incorrect behavior. The fix appears correct and complete.

@kebe7jun
Contributor Author

@LucasWilkinson PTAL.

@LucasWilkinson
Collaborator

@kebe7jun thanks for the contribution! https://github.com/vllm-project/vllm/pull/34552/changes#r2855124116 is actually close to landing, which should eliminate the need for padding; will hold off to see if we can land that in a couple of days.

@mergify

mergify bot commented Mar 1, 2026

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @kebe7jun.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Mar 1, 2026
