
[WIP] Kimi K2.5 NVFP4 recipe#267

Draft
kedarpotdar-nv wants to merge 1 commit intovllm-project:mainfrom
kedarpotdar-nv:kimi-k2.5-fp4

Conversation

@kedarpotdar-nv
Contributor

No description provided.

@kedarpotdar-nv kedarpotdar-nv marked this pull request as draft March 4, 2026 22:15
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces documentation for the Kimi-K2.5-NVFP4 model, an optimized version of Kimi-K2.5 that leverages NVIDIA's quantization techniques. The added content explains the nature of this quantized model and its advantages for deployment, providing essential information for users looking to utilize this specific variant.

Highlights

  • Kimi-K2.5-NVFP4 Documentation: Added a new section detailing the Kimi-K2.5-NVFP4 checkpoint, an NVFP4-quantized version of the Kimi-K2.5 model.
  • Quantization Details: Noted that the NVFP4 version is produced with NVIDIA Model Optimizer; only the weights and activations of the linear operators within the MoE transformer blocks are quantized to FP4.
  • Efficiency Benefits: Highlighted the benefit of the NVFP4 quantization, enabling efficient deployment on fewer GPUs.


Changelog
  • moonshotai/Kimi-K2.5.md
    • Added a new section 'Running Kimi-K2.5-NVFP4' to the markdown file.
    • Included details about the NVFP4 quantization process and its benefits.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds a new section to the Kimi-K2.5.md documentation for the Kimi-K2.5-NVFP4 model. While the new section provides a description of the quantized model, it currently lacks the actual commands or 'recipe' to run it. My review includes a comment to address this missing information, which is especially relevant given that the PR is titled as a 'recipe'.

Note: Security Review has been skipped due to the limited scope of the PR.

Comment on lines +90 to +91
## Running Kimi-K2.5-NVFP4
The [Kimi-K2.5-NVFP4](https://huggingface.co/nvidia/Kimi-K2.5-NVFP4) checkpoint is an NVFP4-quantized version of Kimi-K2.5, produced with [NVIDIA Model Optimizer](https://github.com/NVIDIA/TensorRT-Model-Optimizer). Only the weights and activations of the linear operators within the MoE transformer blocks are quantized to FP4, enabling efficient deployment on fewer GPUs.
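A launch command would round out this section. The sketch below is a hedged, untested illustration: the flags are standard vLLM engine arguments, but the tensor-parallel size is an assumption that should be adjusted to match the actual hardware target for this checkpoint.

```shell
# Hypothetical example: serve the NVFP4 checkpoint with vLLM's
# OpenAI-compatible server. --tensor-parallel-size is illustrative;
# set it to the number of GPUs available. The NVFP4 quantization is
# what allows a smaller GPU count than the BF16 checkpoint requires.
vllm serve nvidia/Kimi-K2.5-NVFP4 \
  --tensor-parallel-size 4 \
  --trust-remote-code
```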
Contributor


Severity: medium

The section ## Running Kimi-K2.5-NVFP4 describes the model but doesn't include the command or 'recipe' to run it. Considering the PR title and the section title, it would be very helpful to add an example command, similar to the one provided for the base Kimi-K2.5 model.
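To illustrate what such an example might pair with once the model is serving, here is a hedged sketch of a request against vLLM's OpenAI-compatible API. The endpoint path and port are vLLM defaults; this assumes a server is already running locally and is not a tested configuration.

```shell
# Hypothetical query against a locally served Kimi-K2.5-NVFP4 instance
# via vLLM's OpenAI-compatible chat completions endpoint.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "nvidia/Kimi-K2.5-NVFP4",
        "messages": [{"role": "user", "content": "Hello"}]
      }'
```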

@kedarpotdar-nv
Contributor Author

@faradawn

