cfm model trtllm conversion #1796
Draft
This PR adds TensorRT-LLM conversion code for the estimator of the CFM model.
The conversion mimics the DiT and STDiT model implementations in TensorRT-LLM:
https://github.com/NVIDIA/TensorRT-LLM/tree/main/tensorrt_llm/models/stdit
https://github.com/NVIDIA/TensorRT-LLM/tree/main/tensorrt_llm/models/dit
The FP16 engine runs 5x faster on an L4 GPU compared with the original Torch model, and the generated audio sounds clean.
example.wav
Conversion is done directly in the Docker environment `soar97/triton-cosyvoice:25.06`; no extra installation is needed. Instructions are in README.md.

**Unresolved Issue**
The converted engine currently does not accept `attention_mask` (hence, it cannot do streaming generation). The attention module extends `tensorrt_llm.layers.attention.BertAttention` to add the RoPE part; however, the bert attention plugin does not accept an attention mask. The alternative is to disable `bert_attention_plugin`, but that lowers audio quality for many of our test voices. @yuekaizhang, could you take a look and let me know how to deal with this? Meanwhile, I will try to use `tensorrt_llm.layers.attention.Attention` directly; the problem is that it has many more parameters, and I keep getting errors when converting.
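For reference, the "RoPE part" that the extended attention applies to the query/key projections before the mask-less bert attention can be sketched in plain NumPy. This is an illustrative sketch of standard interleaved rotary position embedding, not the actual code in this PR; in the TensorRT-LLM graph the same rotation would be built from `tensorrt_llm.functional` ops on the Q/K tensors.

```python
import numpy as np

def rope_cache(seq_len, head_dim, base=10000.0):
    # One rotation frequency per (even, odd) channel pair.
    inv_freq = 1.0 / (base ** (np.arange(0, head_dim, 2) / head_dim))
    angles = np.outer(np.arange(seq_len), inv_freq)  # (seq_len, head_dim // 2)
    return np.cos(angles), np.sin(angles)

def apply_rope(x, cos, sin):
    # x: (seq_len, num_heads, head_dim). Rotate each channel pair by the
    # position-dependent angle; position 0 is the identity rotation.
    x_even, x_odd = x[..., 0::2], x[..., 1::2]
    cos, sin = cos[:, None, :], sin[:, None, :]  # broadcast over heads
    out = np.empty_like(x)
    out[..., 0::2] = x_even * cos - x_odd * sin
    out[..., 1::2] = x_even * sin + x_odd * cos
    return out
```

Because RoPE is a pure per-position rotation of Q and K, it preserves vector norms and commutes with the projection weights, which is why it can be bolted onto `BertAttention` without touching the plugin's internals; the masking limitation comes entirely from the plugin, not from the rotation.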