Update Qwen3-Coder-480B-A35B.md for AMD#222
Conversation
Signed-off-by: haic0 <149741444+haic0@users.noreply.github.com>
Summary of Changes (Gemini Code Assist): Hello @haic0, I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request introduces detailed instructions for deploying and benchmarking the Qwen3-Coder-480B-A35B model on AMD GPUs. The added guide covers the necessary steps, from environment setup using Docker to running the vLLM server with different precision settings and executing performance benchmarks, aiming to facilitate optimal utilization of AMD hardware.
Code Review
This pull request adds documentation for running Qwen3-Coder on AMD GPUs. The changes are a good addition, but there are several areas that could be improved for clarity and correctness. I've identified some redundant text, confusing instructions in the steps for starting the server, and a potentially misleading benchmark command. My review comments provide specific suggestions to address these points and make the guide easier for users to follow.
> MI300X/MI325X/MI355X with fp8: Use FP8 checkpoint for optimal memory efficiency.
Qwen/Qwen3-Coder-480B-A35B.md
Outdated
### Step 3: Start the vLLM server

Run the vllm online serving

```shell
docker exec -it Qwen3-Coder /bin/bash
```

### BF16

```shell
VLLM_ROCM_USE_AITER=1 vllm serve Qwen/Qwen3-Coder-480B-A35B-Instruct --trust-remote-code --max-model-len 131072 --enable-expert-parallel --data-parallel-size 8 --enable-auto-tool-choice --tool-call-parser qwen3_coder
```

### FP8

```shell
VLLM_ROCM_USE_AITER=1 vllm serve Qwen/Qwen3-Coder-480B-A35B-Instruct-FP8 --trust-remote-code --max-model-len 131072 --enable-expert-parallel --data-parallel-size 8 --enable-auto-tool-choice --tool-call-parser qwen3_coder
```
The structure of 'Step 3' is confusing. The heading suggests starting the server, but the command only enters the container. The actual server commands are under separate, incorrectly leveled headings. This can be restructured for clarity by explaining the steps sequentially and using subheadings for the different precision options.
Suggested change:

### Step 3: Start the vLLM server

First, enter the running container with an interactive shell:

```shell
docker exec -it Qwen3-Coder /bin/bash
```

Now, from inside the container, you can start the vLLM server. Choose one of the following commands based on your desired precision.

#### BF16

```shell
VLLM_ROCM_USE_AITER=1 vllm serve Qwen/Qwen3-Coder-480B-A35B-Instruct --trust-remote-code --max-model-len 131072 --enable-expert-parallel --data-parallel-size 8 --enable-auto-tool-choice --tool-call-parser qwen3_coder
```

#### FP8

```shell
VLLM_ROCM_USE_AITER=1 vllm serve Qwen/Qwen3-Coder-480B-A35B-Instruct-FP8 --trust-remote-code --max-model-len 131072 --enable-expert-parallel --data-parallel-size 8 --enable-auto-tool-choice --tool-call-parser qwen3_coder
```

### Step 4: Run Benchmark

Open a new terminal and run the following command to execute the benchmark script inside the container.

```shell
docker exec -it Qwen3-Coder vllm bench serve \
    --model "Qwen/Qwen3-Coder-480B-A35B-Instruct" \
    --dataset-name random \
    --random-input-len 8192 \
    --random-output-len 1024 \
    --request-rate 10000 \
    --num-prompts 16 \
    --ignore-eos \
    --trust-remote-code
```
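As a side note on the numbers in this benchmark command: because `--ignore-eos` forces every request to generate the full output length, the flags fix the total workload exactly. A quick sanity check (plain Python, an illustration by this reviewer, not part of the guide):

```python
# Workload implied by the benchmark flags above.
num_prompts = 16      # --num-prompts
input_len = 8192      # --random-input-len
output_len = 1024     # --random-output-len

prefill_tokens = num_prompts * input_len    # tokens read during prefill
decode_tokens = num_prompts * output_len    # tokens generated (--ignore-eos)
total_tokens = prefill_tokens + decode_tokens

print(prefill_tokens, decode_tokens, total_tokens)  # 131072 16384 147456
```

With `--request-rate 10000` the inter-arrival gap is negligible, so all 16 requests land essentially at once and the run measures one fully concurrent batch.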
The purpose of Step 4 is unclear in relation to Step 3. After starting a server in Step 3, this step describes running what appears to be an offline benchmark, which wouldn't use the running server. To avoid confusion, please clarify if the intention is to benchmark the running server (in which case a client benchmark tool should be used) or to run a separate offline benchmark.
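If the intent is to benchmark the server started in Step 3, one option (a sketch, assuming `vllm bench serve` is pointed at the existing endpoint rather than used standalone) is to pass the running server's address explicitly, e.g.:

```shell
# Hypothetical client-side variant: targets the already-running server
# on localhost:8000 instead of spinning up anything new.
vllm bench serve \
    --base-url http://localhost:8000 \
    --model "Qwen/Qwen3-Coder-480B-A35B-Instruct" \
    --dataset-name random \
    --random-input-len 8192 \
    --random-output-len 1024 \
    --num-prompts 16 \
    --ignore-eos
```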
@Isotr0py Could you help review the PR? Thanks so much!
> ### Step 4: Run Benchmark
> Open a new terminal and run the following command to execute the benchmark script inside the container.
@haic0 hi, must we use this separate instruction? Is it fine to use the benchmark instruction specified for NVIDIA?
We should try to group the commands together under the command section, as in PR #202 and this page:
https://github.com/vllm-project/recipes/blob/main/InternLM/Intern-S1.md
@tjtanaa Thanks, I have changed the bench command to make it the same as NVIDIA's. Please check, thanks.
> ```shell
> VLLM_ROCM_USE_AITER=1 vllm serve Qwen/Qwen3-Coder-480B-A35B-Instruct --trust-remote-code --max-model-len 131072 --enable-expert-parallel --data-parallel-size 8 --enable-auto-tool-choice --tool-call-parser qwen3_coder
> ```
Let's make it a multiline command, to be consistent with the existing format.
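For example, the BF16 invocation above reflowed as a multiline command (same flags, only line continuations added):

```shell
VLLM_ROCM_USE_AITER=1 vllm serve Qwen/Qwen3-Coder-480B-A35B-Instruct \
    --trust-remote-code \
    --max-model-len 131072 \
    --enable-expert-parallel \
    --data-parallel-size 8 \
    --enable-auto-tool-choice \
    --tool-call-parser qwen3_coder
```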
> ```shell
> VLLM_ROCM_USE_AITER=1 vllm serve Qwen/Qwen3-Coder-480B-A35B-Instruct-FP8 --trust-remote-code --max-model-len 131072 --enable-expert-parallel --data-parallel-size 8 --enable-auto-tool-choice --tool-call-parser qwen3_coder
> ```
Let's make it a multiline command, to be consistent with the existing format.

@haic0 Also address this. In all your PRs, make sure to set up the commands as multiline for better readability.
> ## AMD GPU Support
> Recommended approaches by hardware type are:
@haic0 Let's merge all the commands together under the same header as CUDA; we shouldn't create a whole new section. You can refer to this PR.
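For illustration, the merged layout being described might look like the sketch below (heading names are hypothetical, not taken from the actual file):

```markdown
## Launching the vLLM server

### NVIDIA
<existing CUDA commands>

### AMD (MI300X/MI325X/MI355X)
VLLM_ROCM_USE_AITER=1 vllm serve Qwen/Qwen3-Coder-480B-A35B-Instruct ...
```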
No description provided.