Update Qwen3-Coder-480B-A35B.md for AMD #222

Open
haic0 wants to merge 8 commits into vllm-project:main from haic0:patch-23

Conversation


@haic0 haic0 commented Jan 29, 2026

No description provided.

Signed-off-by: haic0 <149741444+haic0@users.noreply.github.com>
@gemini-code-assist

Summary of Changes

Hello @haic0, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers get up to speed quickly.

This pull request introduces detailed instructions for deploying and benchmarking the Qwen3-Coder-480B-A35B model on AMD GPUs. The added guide covers the necessary steps from environment setup using Docker to running the vLLM server with different precision settings and executing performance benchmarks, aiming to facilitate optimal utilization of AMD hardware.

Highlights

  • AMD GPU Support: Added a comprehensive guide for running Qwen3-Coder models on AMD MI300X/MI325X/MI355X GPUs, including Docker setup, Hugging Face login, vLLM server startup (BF16 and FP8), and benchmarking instructions.


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
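As an illustration, a minimal `.gemini/` layout might look like the sketch below. The file names and contents here are assumptions based on the description above, not verified against the official documentation:

```
.gemini/
├── config.yaml     # configuration file for Gemini Code Assist (assumed name)
└── styleguide.md   # custom code review style guide, e.g. "Follow PEP-8 for Python code"
```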

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds documentation for running Qwen3-Coder on AMD GPUs. The changes are a good addition, but there are several areas that could be improved for clarity and correctness. I've identified some redundant text, confusing instructions in the steps for starting the server, and a potentially misleading benchmark command. My review comments provide specific suggestions to address these points and make the guide easier for users to follow.

Comment on lines +140 to +142

MI300X/MI325X/MI355X with fp8: Use FP8 checkpoint for optimal memory efficiency.


medium

This section contains a redundant sentence and extra blank lines. The sentence on line 141 is a duplicate of the information in the following list item. Removing it and the surrounding blank lines will make the document cleaner.

Comment on lines +164 to +184
### Step 3: Start the vLLM server

Run the vllm online serving
```shell
docker exec -it Qwen3-Coder /bin/bash
```

### BF16


```shell
VLLM_ROCM_USE_AITER=1 vllm serve Qwen/Qwen3-Coder-480B-A35B-Instruct --trust-remote-code --max-model-len 131072 --enable-expert-parallel --data-parallel-size 8 --enable-auto-tool-choice --tool-call-parser qwen3_coder
```

### FP8

```shell

VLLM_ROCM_USE_AITER=1 vllm serve Qwen/Qwen3-Coder-480B-A35B-Instruct-FP8 --trust-remote-code --max-model-len 131072 --enable-expert-parallel --data-parallel-size 8 --enable-auto-tool-choice --tool-call-parser qwen3_coder

```

medium

The structure of 'Step 3' is confusing. The heading suggests starting the server, but the command only enters the container. The actual server commands are under separate, incorrectly leveled headings. This can be restructured for clarity by explaining the steps sequentially and using subheadings for the different precision options.

Suggested change
### Step 3: Start the vLLM server
Run the vllm online serving
```shell
docker exec -it Qwen3-Coder /bin/bash
```
### BF16
```shell
VLLM_ROCM_USE_AITER=1 vllm serve Qwen/Qwen3-Coder-480B-A35B-Instruct --trust-remote-code --max-model-len 131072 --enable-expert-parallel --data-parallel-size 8 --enable-auto-tool-choice --tool-call-parser qwen3_coder
```
### FP8
```shell
VLLM_ROCM_USE_AITER=1 vllm serve Qwen/Qwen3-Coder-480B-A35B-Instruct-FP8 --trust-remote-code --max-model-len 131072 --enable-expert-parallel --data-parallel-size 8 --enable-auto-tool-choice --tool-call-parser qwen3_coder
```
### Step 3: Start the vLLM server

First, enter the running container with an interactive shell:

```shell
docker exec -it Qwen3-Coder /bin/bash
```

Now, from inside the container, you can start the vLLM server. Choose one of the following commands based on your desired precision.

#### BF16

```shell
VLLM_ROCM_USE_AITER=1 vllm serve Qwen/Qwen3-Coder-480B-A35B-Instruct --trust-remote-code --max-model-len 131072 --enable-expert-parallel --data-parallel-size 8 --enable-auto-tool-choice --tool-call-parser qwen3_coder
```

#### FP8

```shell
VLLM_ROCM_USE_AITER=1 vllm serve Qwen/Qwen3-Coder-480B-A35B-Instruct-FP8 --trust-remote-code --max-model-len 131072 --enable-expert-parallel --data-parallel-size 8 --enable-auto-tool-choice --tool-call-parser qwen3_coder
```
Comment on lines +187 to +199
### Step 4: Run Benchmark
Open a new terminal and run the following command to execute the benchmark script inside the container.
```shell
docker exec -it Qwen3-Coder vllm bench serve \
--model "Qwen/Qwen3-Coder-480B-A35B-Instruct" \
--dataset-name random \
--random-input-len 8192 \
--random-output-len 1024 \
--request-rate 10000 \
--num-prompts 16 \
--ignore-eos \
--trust-remote-code
```

medium

The purpose of Step 4 is unclear in relation to Step 3. After starting a server in Step 3, this step describes running what appears to be an offline benchmark, which wouldn't use the running server. To avoid confusion, please clarify if the intention is to benchmark the running server (in which case a client benchmark tool should be used) or to run a separate offline benchmark.
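If the intent is to benchmark the server started in Step 3, one way to make that explicit is to point the benchmark client at the running server's address. This is a sketch only; the `--host`/`--port` values assume vLLM's defaults and are not part of the original guide:

```shell
# Run from a second terminal while the server from Step 3 is up.
# --host/--port target the running vLLM server (defaults assumed; adjust if changed).
docker exec -it Qwen3-Coder vllm bench serve \
  --host 127.0.0.1 --port 8000 \
  --model "Qwen/Qwen3-Coder-480B-A35B-Instruct" \
  --dataset-name random \
  --random-input-len 8192 \
  --random-output-len 1024 \
  --num-prompts 16 \
  --ignore-eos \
  --trust-remote-code
```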


haic0 commented Feb 3, 2026

@Isotr0py Could you help review this PR? Thanks so much!



### Step 4: Run Benchmark
Open a new terminal and run the following command to execute the benchmark script inside the container.

@haic0 Hi, must we use this separate instruction? Is it fine to use the benchmark instructions specified for NVIDIA?

Because we should try to group the commands together under the same command section, as in PR #202 and this page:
https://github.com/vllm-project/recipes/blob/main/InternLM/Intern-S1.md


haic0 commented Feb 6, 2026

@tjtanaa Thanks, I have changed the bench command to make it the same as NVIDIA's. Please check, thanks.



```shell
VLLM_ROCM_USE_AITER=1 vllm serve Qwen/Qwen3-Coder-480B-A35B-Instruct --trust-remote-code --max-model-len 131072 --enable-expert-parallel --data-parallel-size 8 --enable-auto-tool-choice --tool-call-parser qwen3_coder
```

Let's make this a multiline command and be consistent with the existing format.
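For reference, the command quoted above wrapped onto multiple lines. This is a formatting sketch only; the flags and values are unchanged from the original:

```shell
VLLM_ROCM_USE_AITER=1 vllm serve Qwen/Qwen3-Coder-480B-A35B-Instruct \
  --trust-remote-code \
  --max-model-len 131072 \
  --enable-expert-parallel \
  --data-parallel-size 8 \
  --enable-auto-tool-choice \
  --tool-call-parser qwen3_coder
```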


```shell
VLLM_ROCM_USE_AITER=1 vllm serve Qwen/Qwen3-Coder-480B-A35B-Instruct-FP8 --trust-remote-code --max-model-len 131072 --enable-expert-parallel --data-parallel-size 8 --enable-auto-tool-choice --tool-call-parser qwen3_coder
```

Let's make this a multiline command and be consistent with the existing format.


@haic0 Please also address this. Make sure that in all your PRs you also set the commands up as multiline for better readability.



## AMD GPU Support
Recommended approaches by hardware type are:

@haic0 Let's merge all the commands together under the same header as CUDA. We shouldn't create a whole new section. You can refer to this PR:

https://github.com/vllm-project/recipes/pull/219/changes
