
Conversation


@zmy1116 zmy1116 commented Jan 19, 2026

This PR adds TRT-LLM conversion code for the estimator of the CFM model.

The conversion code mimics the DiT and STDiT model code in TRT-LLM:
https://github.com/NVIDIA/TensorRT-LLM/tree/main/tensorrt_llm/models/stdit
https://github.com/NVIDIA/TensorRT-LLM/tree/main/tensorrt_llm/models/dit
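For context, the flow follows the same two-step pattern as those examples: map the PyTorch weights onto TRT-LLM parameter names, then build the engine with trtllm-build. A minimal sketch of the weight-mapping step, assuming a flat state dict (the mapping table and key names here are illustrative, not the actual PR code):

    # Hypothetical sketch of the checkpoint-conversion step, mirroring the
    # DiT/STDiT examples: load the torch state dict, rename keys to the
    # TRT-LLM module layout, and save as safetensors for trtllm-build.
    import torch
    import safetensors.torch

    # Illustrative name mapping; the real conversion defines its own table.
    NAME_MAP = {
        "estimator.input_proj.weight": "input_proj.weight",
        "estimator.input_proj.bias": "input_proj.bias",
    }

    def convert_checkpoint(torch_ckpt: str, out_path: str) -> None:
        state = torch.load(torch_ckpt, map_location="cpu")
        converted = {}
        for name, tensor in state.items():
            new_name = NAME_MAP.get(name, name)
            converted[new_name] = tensor.to(torch.float16).contiguous()
        safetensors.torch.save_file(converted, out_path)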

The FP16 engine runs 5x faster on an L4 GPU compared with the original Torch model, and the generated audio sounds clean.
example.wav
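For anyone who wants to reproduce the comparison, a simple CUDA-synchronized timing harness is enough; torch_forward and trt_forward below are placeholders for the actual model call and engine invocation, not names from this PR:

    # Generic GPU timing harness; fn is any callable that runs on the GPU.
    import time
    import torch

    def bench(fn, *args, warmup: int = 10, iters: int = 50) -> float:
        for _ in range(warmup):
            fn(*args)
        torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(iters):
            fn(*args)
        torch.cuda.synchronize()
        return (time.perf_counter() - start) / iters  # seconds per call

Comparing bench(torch_forward, x) against bench(trt_forward, x) gives the per-call speedup.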

Conversion is done directly in the Docker environment soar97/triton-cosyvoice:25.06; no extra installation is needed. Instructions are in README.md.

Unresolved Issue

The converted engine currently does not accept an attention_mask (hence it cannot do streaming generation). The attention module extends BertAttention (from tensorrt_llm.layers.attention) to add the RoPE part; however, the bert attention plugin does not accept an attention mask.

        if default_net().plugin_config.bert_attention_plugin:
            # TRT plugin mode
            assert input_lengths is not None
            context = bert_attention(
                qkv,
                input_lengths,
                self.num_attention_heads,
                self.attention_head_size,
                q_scaling=self.q_scaling,
                relative_attention=self.relative_attention,
                max_distance=self.max_distance,
                relative_attention_bias=self.rel_attn_table.value
                if self.relative_attention else None,
                max_input_length=max_input_length,
                cp_group=self.cp_group,
                cp_size=self.cp_size,
                cp_rank=self.cp_rank)
        else:
            # ootb (non-plugin) path: attention built from plain TRT-LLM ops;
            # this branch can consume an explicit attention_mask (truncated here)
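For reference, the RoPE part that the subclass adds amounts to rotating q and k by position-dependent angles before the attention product. A minimal sketch in plain PyTorch for clarity; the actual module expresses the same math with TRT-LLM functional ops inside the network definition:

    # Plain-PyTorch sketch of interleaved rotary position embedding (RoPE),
    # applied to q and k before attention.
    import torch

    def apply_rope(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
        # x: [batch, heads, seq, head_dim]; head_dim must be even
        _, _, seq_len, dim = x.shape
        inv_freq = 1.0 / (base ** (
            torch.arange(0, dim, 2, dtype=torch.float32, device=x.device) / dim))
        angles = torch.arange(seq_len, dtype=torch.float32,
                              device=x.device)[:, None] * inv_freq[None, :]
        cos = angles.cos().to(x.dtype)           # [seq, head_dim/2]
        sin = angles.sin().to(x.dtype)
        x1, x2 = x[..., 0::2], x[..., 1::2]      # interleaved even/odd pairs
        out = torch.stack((x1 * cos - x2 * sin, x1 * sin + x2 * cos), dim=-1)
        return out.flatten(-2)                   # back to [batch, heads, seq, head_dim]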

The alternative solution is to disable the bert_attention_plugin, but the audio quality would be lower for many of our test voices.
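For completeness, a hedged sketch of that fallback: disabling the plugin at build time so the ootb attention path (which does take a mask) is used. The attribute names follow the TRT-LLM Python build API at the time of writing and may differ across versions:

    # Sketch only: disable the BERT attention plugin in the build config so
    # the engine is built with the ootb attention path instead.
    from tensorrt_llm import BuildConfig

    build_config = BuildConfig(max_batch_size=1)
    build_config.plugin_config.bert_attention_plugin = None  # plugin off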

@yuekaizhang could you take a look and let me know how to deal with this? Meanwhile, I will try to see if I can use Attention (from tensorrt_llm.layers.attention) directly; the problem is that it has many more parameters, and I keep getting errors when converting.

@zmy1116 zmy1116 marked this pull request as draft January 19, 2026 02:43
@yuekaizhang
Contributor

@zmy1116 Thanks for the amazing work!

The current code is, at the very least, already very useful for offline TTS.

I think you may also try testing with the new PyTorch workflow. A BERT example: https://github.com/NVIDIA/TensorRT-LLM/blob/main/tensorrt_llm/_torch/models/modeling_bert.py

It has a much easier attention interface and supports a custom attention mask: https://github.com/NVIDIA/TensorRT-LLM/blob/main/tensorrt_llm/_torch/attention_backend/interface.py#L634.
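Conceptually, the custom-mask support works the way an explicit mask does in plain PyTorch attention; the snippet below is an illustration using torch's scaled_dot_product_attention, not the actual TRT-LLM torch-backend API:

    # Illustration only: passing an explicit boolean attention mask, where
    # True marks positions that may attend to each other.
    import torch
    import torch.nn.functional as F

    q = k = v = torch.randn(1, 8, 128, 64)                      # [batch, heads, seq, dim]
    mask = torch.ones(1, 1, 128, 128, dtype=torch.bool).tril()  # e.g. causal mask
    out = F.scaled_dot_product_attention(q, k, v, attn_mask=mask)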

@zmy1116 zmy1116 mentioned this pull request Jan 21, 2026
Author

zmy1116 commented Jan 21, 2026

> @zmy1116 Thanks for the amazing work!
>
> The current code is, at the very least, already very useful for offline TTS.
>
> I think you may also try testing with the new PyTorch workflow. A BERT example: https://github.com/NVIDIA/TensorRT-LLM/blob/main/tensorrt_llm/_torch/models/modeling_bert.py
>
> It has a much easier attention interface and supports a custom attention mask: https://github.com/NVIDIA/TensorRT-LLM/blob/main/tensorrt_llm/_torch/attention_backend/interface.py#L634.

Thanks! A torch-like API interface is definitely helpful. I will try to go through it this weekend.

