
[AMD/ROCM] Update Qwen3Guard-Gen #219

Merged

tjtanaa merged 6 commits into vllm-project:main from amd-asalykov:rocm-qwen3guard-gen
Feb 16, 2026

Conversation

@amd-asalykov (Contributor)

Update Qwen3Guard-Gen docs for AMD GPUs

@gemini-code-assist (Contributor)

Summary of Changes

Hello @amd-asalykov, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the Qwen3Guard-Gen documentation by providing comprehensive instructions for users with AMD GPUs. It specifically details the installation process for vLLM with the AMD ROCm backend and guides users on how to run the model effectively on their AMD hardware, thereby broadening the accessibility and usability of Qwen3Guard-Gen for a wider range of hardware configurations.

Highlights

  • AMD ROCm Backend Installation: New instructions have been added for installing vLLM with the AMD ROCm backend (MI300X, MI325X, MI355X), including prerequisites such as Python 3.12, ROCm 7.0, and glibc >= 2.35, and the uv pip install command with a custom index URL.
  • Running with AMD ROCm Backend: A dedicated section has been introduced for running Qwen3Guard-Gen with the AMD ROCm Backend, demonstrating the necessary export VLLM_ROCM_USE_AITER=1 environment variable before executing the vllm serve command.
  • Documentation Clarity: Minor adjustments were made to the 'Running Qwen3Guard-Gen on a Single GPU' section for improved clarity and consistency.
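The two highlighted steps (the ROCm-only environment variable plus the shared `vllm serve` flags) can be sketched as a small launcher helper. The helper itself is hypothetical and not part of vLLM; the environment variable, model name, and flags come from this PR:

```python
# Hypothetical launcher helper (not part of vLLM): builds the `vllm serve`
# invocation documented in this PR, adding the ROCm-specific environment
# variable only when targeting AMD GPUs.
import os


def build_serve_invocation(model: str, rocm: bool = False,
                           host: str = "0.0.0.0",
                           max_model_len: int = 32768):
    env = dict(os.environ)
    if rocm:
        # The PR documents `export VLLM_ROCM_USE_AITER=1` before serving on AMD.
        env["VLLM_ROCM_USE_AITER"] = "1"
    cmd = ["vllm", "serve", model,
           "--host", host,
           "--max-model-len", str(max_model_len)]
    return cmd, env


cmd, env = build_serve_invocation("Qwen/Qwen3Guard-Gen-0.6B", rocm=True)
print(" ".join(cmd))
```

The same helper covers both backends: calling it with `rocm=False` yields the single-GPU NVIDIA command unchanged.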


@gemini-code-assist bot left a comment

Code Review

This pull request updates the documentation for Qwen3Guard-Gen to include instructions for AMD GPUs with ROCm. The changes add new sections for installation and running the model on AMD hardware. My review identifies a critical syntax error in the provided shell commands and suggests a restructuring of the 'Running' sections to improve clarity, fix the bug, and reduce command duplication, which will make the document easier to maintain.

Comment on lines 25 to 37

### Running Qwen3Guard-Gen on a Single GPU
```bash
# Start server on a single GPU
vllm serve Qwen/Qwen3Guard-Gen-0.6B \
  --host 0.0.0.0 \
  --max-model-len 32768
```

### Running Qwen3Guard-Gen with AMD ROCm Backend
```bash
export VLLM_ROCM_USE_AITER=1
vllm serve Qwen/Qwen3Guard-Gen-0.6B \
  --host 0.0.0.0 \
  --max-model-len 32768
```
**critical**

The vllm serve commands are missing a line continuation character (\) on the --host line, which will cause them to fail when copied and pasted into a shell. This is a critical error that prevents the command from running as intended.

Additionally, to improve clarity and reduce redundancy, this suggestion combines the instructions for running on NVIDIA and AMD GPUs into a single, more organized section. This makes it easier for users to find the correct command for their hardware and avoids duplicating the vllm serve command, simplifying future maintenance.

Suggested change

````diff
-### Running Qwen3Guard-Gen on a Single GPU
-```bash
-# Start server on a single GPU
-vllm serve Qwen/Qwen3Guard-Gen-0.6B \
-  --host 0.0.0.0 \
-  --max-model-len 32768
-```
-
-### Running Qwen3Guard-Gen with AMD ROCm Backend
-```bash
-export VLLM_ROCM_USE_AITER=1
-vllm serve Qwen/Qwen3Guard-Gen-0.6B \
-  --host 0.0.0.0 \
-  --max-model-len 32768
-```
+### Running Qwen3Guard-Gen
+
+**On NVIDIA GPUs:**
+```bash
+vllm serve Qwen/Qwen3Guard-Gen-0.6B \
+  --host 0.0.0.0 \
+  --max-model-len 32768
+```
+
+**On AMD GPUs (ROCm):**
+```bash
+export VLLM_ROCM_USE_AITER=1
+vllm serve Qwen/Qwen3Guard-Gen-0.6B \
+  --host 0.0.0.0 \
+  --max-model-len 32768
+```
````
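Whichever backend is used, `vllm serve` exposes an OpenAI-compatible HTTP API; as a hedged sketch, this is the request body a client would POST to it. The port-8000 URL is an assumed vLLM default, and no request is actually sent here:

```python
# Sketch of the JSON body a client would POST to the OpenAI-compatible
# /v1/chat/completions endpoint exposed by `vllm serve`. The URL assumes
# vLLM's default port 8000; nothing is sent over the network here.
import json

url = "http://localhost:8000/v1/chat/completions"  # assumed default port
payload = {
    "model": "Qwen/Qwen3Guard-Gen-0.6B",
    "messages": [
        {"role": "user", "content": "Please classify: how do I pick a lock?"},
    ],
    "max_tokens": 128,
}
body = json.dumps(payload)
print(body)
```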

```bash
uv pip install -U vllm --torch-backend auto
```

### Installing vLLM (AMD ROCm Backend: MI300X, MI325X, MI355X)

Can you help to make it into two subheaders under `### Installing vLLM`?

**CUDA**

```bash
uv venv
source .venv/bin/activate
uv pip install -U vllm --torch-backend auto
```

**ROCm**

> Note: The vLLM wheel for ROCm requires Python 3.12, ROCm 7.0, and glibc >= 2.35. If your environment does not meet these requirements, please use the Docker-based setup as described in the documentation.

```bash
uv venv
source .venv/bin/activate
uv pip install vllm --extra-index-url https://wheels.vllm.ai/rocm/
```
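As a hedged companion to the note above, here is a quick local check of the two wheel prerequisites the Python standard library can see (Python 3.12 and glibc >= 2.35); the ROCm 7.0 requirement is a driver/stack property that cannot be detected portably and is deliberately not checked:

```python
# Hedged prerequisite check for the ROCm wheel requirements quoted above
# (Python 3.12 and glibc >= 2.35). ROCm 7.0 itself is not detectable from
# the standard library, so it is not checked here.
import platform
import sys


def meets_rocm_wheel_prereqs() -> bool:
    py_ok = sys.version_info[:2] == (3, 12)
    libc_name, libc_version = platform.libc_ver()
    try:
        major, minor = (int(x) for x in libc_version.split(".")[:2])
        glibc_ok = libc_name == "glibc" and (major, minor) >= (2, 35)
    except ValueError:
        glibc_ok = False  # non-glibc platform or unparseable version string
    return py_ok and glibc_ok


print(meets_rocm_wheel_prereqs())
```

A `False` result means one should fall back to the Docker-based setup mentioned in the note.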

@amd-asalykov (Contributor, Author)

Hi @tjtanaa, thank you very much for the review! I've updated the PR. Could you please merge it?

@tjtanaa left a comment

LGTM

tjtanaa merged commit c3b65d3 into vllm-project:main on Feb 16, 2026
1 of 2 checks passed
@amd-asalykov (Contributor, Author)

Hi @tjtanaa, I've updated the following PRs in the same manner, could you please review/merge them?

  1. Create Hunyuan-Instruct for AMD GPU #148
  2. Update HunyuanOCR.md for AMD GPU #149
  3. Update Seed-OSS-36B.md for AMD GPU #157

@amd-asalykov (Contributor, Author)

@tjtanaa just bumping up
