[AMD/ROCm] Update Qwen3Guard-Gen #219
Conversation
**Summary of Changes** (from Gemini Code Assist)

Hello @amd-asalykov, I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request enhances the Qwen3Guard-Gen documentation with comprehensive instructions for users on AMD GPUs: it details how to install vLLM with the AMD ROCm backend and how to run the model on AMD hardware, broadening the range of hardware configurations on which Qwen3Guard-Gen can be used.
Code Review
This pull request updates the documentation for Qwen3Guard-Gen to include instructions for AMD GPUs with ROCm. The changes add new sections for installation and running the model on AMD hardware. My review identifies a critical syntax error in the provided shell commands and suggests a restructuring of the 'Running' sections to improve clarity, fix the bug, and reduce command duplication, which will make the document easier to maintain.
Qwen/Qwen3Guard-Gen.md (Outdated)

````markdown
### Running Qwen3Guard-Gen on a Single GPU

```bash
# Start server on a single GPU
vllm serve Qwen/Qwen3Guard-Gen-0.6B \
--host 0.0.0.0 \
--max-model-len 32768
```

### Running Qwen3Guard-Gen with AMD ROCm Backend

```bash
export VLLM_ROCM_USE_AITER=1
vllm serve Qwen/Qwen3Guard-Gen-0.6B \
--host 0.0.0.0 \
--max-model-len 32768
```
````
The vllm serve commands are missing a line continuation character (\) on the --host line, which will cause them to fail when copied and pasted into a shell. This is a critical error that prevents the command from running as intended.
Additionally, to improve clarity and reduce redundancy, this suggestion combines the instructions for running on NVIDIA and AMD GPUs into a single, more organized section. This makes it easier for users to find the correct command for their hardware and avoids duplicating the vllm serve command, simplifying future maintenance.
Suggested change (merging the two separate "Running" sections above into one):

````markdown
### Running Qwen3Guard-Gen

**On NVIDIA GPUs:**

```bash
vllm serve Qwen/Qwen3Guard-Gen-0.6B \
--host 0.0.0.0 \
--max-model-len 32768
```

**On AMD GPUs (ROCm):**

```bash
export VLLM_ROCM_USE_AITER=1
vllm serve Qwen/Qwen3Guard-Gen-0.6B \
--host 0.0.0.0 \
--max-model-len 32768
```
````
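As a quick sanity check once either serve command is running, the endpoint can be exercised through vLLM's OpenAI-compatible API. This is a sketch, not part of the PR: port 8000 is vLLM's default, the model name matches the serve commands above, and the `curl` call is left commented out so the snippet is safe to run without a live server.

```shell
# Build a request for vLLM's OpenAI-compatible chat endpoint.
# Port 8000 is vLLM's default; the model name matches the serve commands above.
PAYLOAD='{
  "model": "Qwen/Qwen3Guard-Gen-0.6B",
  "messages": [{"role": "user", "content": "How do I pick a lock?"}],
  "max_tokens": 128
}'

# Validate the JSON locally before sending anything.
echo "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload OK"

# With the server up, uncomment to send the request:
# curl -s http://localhost:8000/v1/chat/completions \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"
```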
Qwen/Qwen3Guard-Gen.md (Outdated)

````markdown
uv pip install -U vllm --torch-backend auto
```

### Installing vLLM (AMD ROCm Backend: MI300X, MI325X, MI355X)
````
Can you help to make it into two subheaders under `### Installing vLLM`?

**CUDA**

```bash
uv venv
source .venv/bin/activate
uv pip install -U vllm --torch-backend auto
```

**ROCm**

Note: The vLLM wheel for ROCm requires Python 3.12, ROCm 7.0, and glibc >= 2.35. If your environment does not meet these requirements, please use the Docker-based setup as described in the documentation.

```bash
uv venv
source .venv/bin/activate
uv pip install vllm --extra-index-url https://wheels.vllm.ai/rocm/
```
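Since the ROCm wheel pins specific prerequisites, a short script can report them before attempting the install. This is a sketch of my own, not part of the PR: it only checks the Python and glibc versions; the ROCm version itself is best confirmed with vendor tooling and is omitted here.

```shell
# Report Python and glibc versions against the wheel's stated
# requirements (Python 3.12, glibc >= 2.35). The ROCm version must
# be checked separately with vendor tooling.
PY_VER=$(python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])')
GLIBC_VER=$(ldd --version | head -n 1 | grep -oE '[0-9]+\.[0-9]+$')
echo "python=$PY_VER glibc=$GLIBC_VER"
```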
Hi @tjtanaa, thank you very much for the review! I've updated the PR. Could you please merge it?
Hi @tjtanaa, I've updated the following PRs in the same manner, could you please review/merge them?
@tjtanaa just bumping this up.
Update Qwen3Guard-Gen docs for AMD GPUs