Llama 3.1 update for AMD GPU#211

Open
hyukjlee wants to merge 5 commits into vllm-project:main from hyukjlee:llama_3_1

Conversation

@hyukjlee

Requesting review for Llama 3.1 PR.

Signed-off-by: hyukjlee <hyukjlee@amd.com>
@gemini-code-assist
Contributor

Summary of Changes

Hello @hyukjlee, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a new documentation guide aimed at assisting users in deploying and evaluating the Llama 3.1 8B Instruct model on AMD MI300X/MI355X GPUs. The guide outlines the necessary prerequisites and provides clear instructions for setting up the environment using vLLM, performing inference, and conducting performance benchmarks to ensure optimal utilization of AMD hardware for large language models.

Highlights

  • New Quick Start Guide: A new Markdown file, Llama/Llama3.1_AMD.md, has been added, providing a comprehensive quick start guide for deploying the Llama 3.1 8B Instruct model.
  • AMD GPU Support: The guide specifically targets AMD MI300X/MI355X GPUs, detailing how to leverage vLLM for efficient model serving on this hardware.
  • Deployment and Benchmarking Steps: It includes step-by-step instructions for setting up the vLLM Docker image, starting the vLLM online server, running inference, and performing performance benchmarks.
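The steps above can be sketched roughly as follows. This is only an illustration of the workflow the guide describes, not the guide's actual commands: the image tag, device flags, and port are assumptions, and the exact values should be taken from Llama/Llama3.1_AMD.md itself.

```shell
# Pull a ROCm-enabled vLLM image (tag is illustrative; use the one the guide specifies)
docker pull rocm/vllm:latest

# Launch the container with the AMD GPU devices passed through to it
docker run -it --network=host \
  --device=/dev/kfd --device=/dev/dri \
  --group-add video --ipc=host \
  rocm/vllm:latest

# Inside the container: start the vLLM OpenAI-compatible online server.
# -tp sets the tensor-parallel size; a single MI300X is enough for the 8B model.
vllm serve meta-llama/Llama-3.1-8B-Instruct -tp 1 --port 8000 &
```

Once the server reports it is ready, inference requests can be sent to the OpenAI-compatible endpoint on the chosen port.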

🧠 New Feature in Public Preview: You can now enable Memory to help Gemini Code Assist learn from your team's feedback. This makes future code reviews more consistent and personalized to your project's style. Memory can be enabled in your admin console.

Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions are available in the documentation.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, and Terms of Service, and learn how to configure Gemini Code Assist in GitHub. Gemini can make mistakes, so double-check its output and use code with caution.

gemini-code-assist bot left a comment

Code Review

Thank you for adding this recipe for Llama 3.1 on AMD hardware. The guide is clear and easy to follow. I've made a few suggestions to improve clarity and consistency with other recipes in the repository. My feedback includes correcting an inconsistency in the listed supported GPUs, simplifying a Docker command for better readability, and renaming a section header to more accurately reflect its content.

Excerpt from the reviewed file (Llama/Llama3.1_AMD.md):

```shell
-tp $TP &
```

### 3. Running Inference using benchmark script
Severity: medium

The heading "Running Inference using benchmark script" is a bit misleading, since the command that follows is a single curl request to test inference, not a benchmark script. For consistency with other recipes in this repository, which use "Basic Test" for similar checks, I recommend renaming this section.

Suggested change:

```diff
-### 3. Running Inference using benchmark script
+### 3. Basic Test
```
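For context, a basic test of this kind is typically a single request against vLLM's OpenAI-compatible chat endpoint, along the following lines. The host, port, and prompt here are assumptions for illustration, not the exact request from the guide under review:

```shell
# Send one chat completion request to the running vLLM server
# (assumes the server was started on localhost:8000 with the Llama 3.1 8B model)
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "meta-llama/Llama-3.1-8B-Instruct",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
        "max_tokens": 64
      }'
```

A successful response is a JSON body containing a `choices` array with the model's reply, which confirms the server is up before moving on to the performance benchmarks.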

hyukjlee and others added 4 commits January 28, 2026 10:40
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Signed-off-by: Hyukjoon Lee <hyukjlee@amd.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Signed-off-by: Hyukjoon Lee <hyukjlee@amd.com>
Signed-off-by: Hyukjoon Lee <hyukjlee@amd.com>
Signed-off-by: Hyukjoon Lee <hyukjlee@amd.com>

Labels: none yet
Projects: none yet
1 participant