Fix mbridge inference and use dynamic inference from mcore #627
Open
oyilmaz-nvidia wants to merge 5 commits into main
Conversation
… dynamic inference

- Add `nemo_deploy/llm/inference/nemo_utils.py`, which vendors standalone NeMo utilities (`MCoreTokenizerWrappper`, ckpt path helpers, constants) with no dependency on the `nemo` package, and re-exports the complex NeMo types (`GPTConfig`, `T5Config`, `io`, `set_modelopt_spec_if_exists_in_ckpt`) under a single `HAVE_NEMO` guard.
- Remove direct `from nemo.*` imports from `inference_base.py` and `tron_utils.py`; both files now import from the local `nemo_utils` module instead.
- Fix `AttributeError` in `create_mcore_engine`: `GPTInferenceWrapper` was called with `(model, inference_context)`, but the deployed Megatron-LM API expects `(model, inference_wrapper_config, inference_context)`. Add an `InferenceWrapperConfig` built from `model.config` attributes; `MCoreEngine` then internally creates a `DynamicInferenceContext` and switches to `DynamicInferenceEngine`.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
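The third fix above can be sketched roughly as follows. This is a minimal, self-contained illustration of the corrected call order, not the actual Megatron-LM code: the classes here are stand-ins, and the specific config attributes (`hidden_size`, `params_dtype`, `padded_vocab_size`) are assumptions inferred from the commit message.

```python
from dataclasses import dataclass


# Stand-in for Megatron-LM's InferenceWrapperConfig; the field names are
# illustrative assumptions, not the library's real signature.
@dataclass
class InferenceWrapperConfig:
    hidden_size: int
    params_dtype: str
    padded_vocab_size: int


# Hypothetical model config and model wrapper used only for this sketch.
@dataclass
class ModelConfig:
    hidden_size: int = 4096
    params_dtype: str = "bf16"
    padded_vocab_size: int = 32000


@dataclass
class Model:
    config: ModelConfig


class GPTInferenceWrapper:
    # The deployed API takes (model, wrapper_config, inference_context);
    # the bug was constructing it with only (model, inference_context).
    def __init__(self, model, wrapper_config, inference_context):
        self.model = model
        self.config = wrapper_config
        self.context = inference_context


def create_mcore_engine(model, inference_context=None):
    # Fix: build the wrapper config from the model's config attributes
    # before constructing the inference wrapper.
    cfg = model.config
    wrapper_config = InferenceWrapperConfig(
        hidden_size=cfg.hidden_size,
        params_dtype=cfg.params_dtype,
        padded_vocab_size=cfg.padded_vocab_size,
    )
    return GPTInferenceWrapper(model, wrapper_config, inference_context)
```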
Contributor
Author
/ok to test c4dcc44
- Remove unused `StaticInferenceContext` import
- Use inner model config for `hidden_size`/`params_dtype` instead of the outer model
- Add `buffer_size_gb` param to `create_mcore_engine` and `MegatronLLMDeployable`

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
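The new `buffer_size_gb` knob might be plumbed through as sketched below. Only the parameter name comes from the commit message; the function body and return value are hypothetical stand-ins for the real `create_mcore_engine` / dynamic-context plumbing.

```python
def gb_to_bytes(buffer_size_gb: float) -> int:
    # Convert the user-facing GB setting into a byte budget, e.g. for
    # sizing the dynamic inference context's KV-cache buffer.
    return int(buffer_size_gb * 1024 ** 3)


def create_mcore_engine(model, buffer_size_gb: float = 20.0) -> dict:
    # Hypothetical stand-in: pass the converted byte budget down with
    # the model (the real function would build an MCoreEngine instead).
    return {"model": model, "buffer_bytes": gb_to_bytes(buffer_size_gb)}
```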
Contributor
Author
/ok to test ce646ce
athitten reviewed Mar 4, 2026
    HAVE_NEMO,
    MCoreTokenizerWrappper,
    ckpt_to_context_subdir,
    ckpt_to_weights_subdir,
Contributor
@oyilmaz-nvidia do we want to move nemo 2.0 functionality here? Can't we just remove it, since the nemo 2.0 deployment code is already removed anyway?
Contributor
Author
So that's for importing nemo, and I'll have another PR to remove it. It's actually a lot more challenging than just adding it here.
Signed-off-by: Onur Yilmaz <oyilmaz@nvidia.com>
Contributor
Author
/ok to test 3b99d12
Signed-off-by: Charlie Truong <chtruong@nvidia.com>
Contributor
/ok to test c5fdd40