Conversation

@Ratish1 Ratish1 (Contributor) commented Jan 10, 2026

Description

This PR implements a Bitwise Weight Correctness Checker to verify that model weights synced from the Miles training engine (Megatron-LM) to the SGLang inference engine are bit-for-bit identical (#405).

Key Changes

  1. Miles Training Engine
  • Arguments: Added --enable-weight-checker in miles/utils/arguments.py, decoupled from --check-weight-update-equal to allow independent bitwise verification.
  • Updated miles/ray/rollout.py to accept and propagate a payload dictionary in the check_weights method.
  • Updated miles/backends/sglang_utils/sglang_engine.py to include the payload in the weights_checker HTTP request.
  • Modified UpdateWeightFromTensor and UpdateWeightFromDistributed to compute SHA256 hashes of the full parameters (after gathering shards into HF format) on Rank 0.
  • Updated the check_weights method to transmit these ground-truth hashes to SGLang engines via the compare_checksum action.
  2. SGLang Engine (see sgl-project/sglang#17009: "feat: add support for bitwise weight verification via checksums")
  • IOStruct: Updated CheckWeightsReqInput to accept optional checksums (list | dict) and rank_offset (int).
  • Scheduler/ModelRunner: Propagated the checksum payload from the API endpoint to the WeightChecker.
  • WeightChecker: Implemented a 3-stage verification protocol:
    • Stage A (Direct Match): Verifies replicated parameters (e.g., norms, biases) directly.
    • Stage B (Shard Reconstruction): Gathers local Tensor Parallel (TP) shards from all workers and reconstructs the full parameter by concatenating along dim 0 (ColumnParallel) or dim 1 (RowParallel).
    • Stage C (Dtype Alignment): Casts reconstructed tensors to bfloat16 so their bit representation matches the trainer's source format before hashing.
  3. Tests
  • Created tests/test_weight_update_correctness.py, which runs Qwen2.5-3B on 2 GPUs (TP=2). It validates that the weight update handshake completes successfully and triggers no bitwise mismatches.
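The trainer-side hashing described above can be sketched in a few lines. This is a minimal illustration, not the actual Miles API: NumPy arrays stand in for torch tensors, and `param_checksum` and the `weights` dict are made-up names.

```python
# Minimal sketch of the trainer-side checksum scheme described above.
# NumPy arrays stand in for torch tensors; function and variable names
# are illustrative, not the actual Miles code.
import hashlib

import numpy as np


def param_checksum(param: np.ndarray) -> str:
    """Hash a parameter's raw bytes so the comparison is bit-exact."""
    buf = np.ascontiguousarray(param)  # mirrors .contiguous() before hashing
    return hashlib.sha256(buf.tobytes()).hexdigest()


# Rank 0 would build a {name: digest} map after gathering full HF-format
# params, then ship it to the inference side via the compare_checksum action.
weights = {"model.norm.weight": np.ones(8, dtype=np.float32)}
checksums = {name: param_checksum(p) for name, p in weights.items()}
```

Because the digest is computed over raw bytes, any single-bit difference in a parameter produces a different SHA256 hex digest, which is exactly the bit-for-bit guarantee the checker needs.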

@gemini-code-assist (Contributor)

Summary of Changes

Hello @Ratish1, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the reliability of model deployment by introducing a robust bitwise weight correctness checker. This system ensures that model weights transferred from the Miles training framework to the SGLang inference engine maintain their exact bit-level representation, crucial for preventing subtle performance degradations or errors due to weight discrepancies in distributed environments. The implementation covers both the generation of ground-truth hashes on the training side and a sophisticated multi-stage verification process on the inference side, including handling of sharded parameters.

Highlights

  • Bitwise Weight Correctness Checker: A new mechanism has been implemented to verify bit-for-bit identical model weights between the Miles training engine (Megatron-LM) and the SGLang inference engine.
  • Miles Engine Enhancements: The Miles training engine now includes an --enable-weight-checker argument. Updates to miles/ray/rollout.py and miles/backends/sglang_utils/sglang_engine.py facilitate payload propagation, and weight update functions (UpdateWeightFromTensor, UpdateWeightFromDistributed) compute and transmit SHA256 hashes of full parameters on Rank 0.
  • SGLang Engine Integration: The SGLang inference engine has been patched to update CheckWeightsReqInput to accept checksums and rank_offset. This payload is propagated through the Scheduler and ModelRunner to a WeightChecker, which implements a 3-stage verification protocol: direct match for replicated parameters, shard reconstruction for Tensor Parallel (TP) layers, and dtype alignment to bfloat16 before hashing.
  • Comprehensive Testing: A new test file, tests/test_weight_update_correctness.py, has been added. This test validates the end-to-end functionality of the weight correctness checker using a Qwen2.5-3B model on 2 GPUs (TP=2), ensuring the weight update handshake and bitwise verification function correctly without mismatches.
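Of the three verification stages summarized above, Stage B (shard reconstruction) is the one with real structure. A rough sketch, again with NumPy arrays standing in for the TP shards gathered from each worker (the function name and the dim-0/dim-1 convention follow the PR description, not actual SGLang code):

```python
# Sketch of Stage B (shard reconstruction): ColumnParallel layers are
# split along dim 0, RowParallel layers along dim 1, per the PR text.
import numpy as np


def reconstruct_full_param(shards, column_parallel: bool) -> np.ndarray:
    """Concatenate gathered TP shards back into the full parameter."""
    axis = 0 if column_parallel else 1
    return np.concatenate(shards, axis=axis)


# e.g. a (4, 6) weight split across TP=2 workers:
col_shards = [np.zeros((2, 6)), np.ones((2, 6))]  # ColumnParallel: split on dim 0
row_shards = [np.zeros((4, 3)), np.ones((4, 3))]  # RowParallel: split on dim 1
full_col = reconstruct_full_param(col_shards, column_parallel=True)
full_row = reconstruct_full_param(row_shards, column_parallel=False)
```

Stage C then casts the reconstructed tensor to bfloat16 before hashing, so the digest is computed over the same bit pattern the trainer hashed.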


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a bitwise weight correctness checker, which is a valuable feature for ensuring model state synchronization between the training and inference engines. The implementation is comprehensive, covering both the training (Miles) and inference (SGLang) sides, along with integration tests. My review focuses on ensuring the correctness and robustness of this new checker. I've identified a few high-severity issues, including a potential data type mismatch during checksum calculation, an inconsistent condition for checksum generation, and a missing None check that could lead to a runtime error. I have also provided several medium-severity suggestions to improve code clarity, maintainability, and adherence to best practices. Addressing these points will enhance the reliability and quality of this new feature.

Comment on lines +230 to +233

```python
if self.args.enable_weight_checker or self.args.check_weight_update_equal:
    for name, tensor in converted_named_tensors:
        t_cpu = tensor.detach().cpu().contiguous()
        self._last_checksums[name] = hashlib.sha256(t_cpu.view(torch.uint8).numpy()).hexdigest()
```

Severity: high

The checksum is calculated here using the tensor's original data type. However, the corresponding verification logic on the SGLang side hardcodes a cast to torch.bfloat16 before hashing. This discrepancy will cause checksum validation to fail if the tensor's dtype is not already bfloat16. To ensure a correct bitwise comparison, you should cast the tensor to bfloat16 here as well before computing the hash.

Suggested change

```diff
 if self.args.enable_weight_checker or self.args.check_weight_update_equal:
     for name, tensor in converted_named_tensors:
-        t_cpu = tensor.detach().cpu().contiguous()
+        t_cpu = tensor.to(torch.bfloat16).detach().cpu().contiguous()
         self._last_checksums[name] = hashlib.sha256(t_cpu.view(torch.uint8).numpy()).hexdigest()
```
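The mismatch being flagged is easy to demonstrate without torch: hashing the same values at two precisions yields different digests. In this sketch, float32 vs float16 merely stand in for the trainer's dtype vs the hardcoded bfloat16 cast on the SGLang side.

```python
# Illustration of the dtype issue: the digest is over raw bytes, so the
# dtype used at hash time must match on both sides. float32 vs float16
# here stand in for the trainer dtype vs the bfloat16 cast in SGLang.
import hashlib

import numpy as np

x32 = np.array([0.1, 0.2, 0.3], dtype=np.float32)
x16 = x32.astype(np.float16)

digest32 = hashlib.sha256(x32.tobytes()).hexdigest()
digest16 = hashlib.sha256(x16.tobytes()).hexdigest()
# digest32 != digest16 even though the values are numerically "close",
# so both sides must cast to the same dtype before hashing.
```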

@zhaochenyang20 (Collaborator)

Great work so far. I strongly suggest that we first submit the SGLang patch to SGLang directly, and then come back and use it in Miles. Could we do it as I described? Thanks!

@Ratish1 Ratish1 (Contributor, Author) commented Jan 13, 2026

> Great work so far. I strongly suggest that we first submit the SGLang patch to SGLang directly, and then come back and use it in Miles. Could we do it as I described? Thanks!

Yes, I will open a PR in SGLang today. Thanks.
