Hi,
Thank you for your excellent work. I have a question about NeoVerse’s runtime. From my understanding, you use a distillation-based approach to reduce the number of inference steps at test time. Could you share more details about how this is done?
For example, did you use LightX2V (or a similar method) to distill the model after the original training, train a LoRA for few-step inference, and then apply that LoRA to both the main branch and the control/context branch?
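To make the question concrete, here is a minimal sketch of what I mean by applying the same few-step LoRA to both branches. All names and shapes are hypothetical; it only illustrates the standard LoRA merge, W' = W + (alpha/r) · B A, applied to matching layers in each branch:

```python
import numpy as np

def merge_lora(w, lora_a, lora_b, alpha):
    """Merge a LoRA update into a base weight: W' = W + (alpha/r) * B @ A."""
    r = lora_a.shape[0]  # LoRA rank
    return w + (alpha / r) * (lora_b @ lora_a)

rng = np.random.default_rng(0)
d_out, d_in, rank, alpha = 8, 8, 2, 4.0

# Hypothetical base weights for the main branch and the control/context branch.
main_w = rng.standard_normal((d_out, d_in))
ctrl_w = rng.standard_normal((d_out, d_in))

# One distilled few-step LoRA (A: r x d_in, B: d_out x r), shared by both branches.
lora_a = rng.standard_normal((rank, d_in))
lora_b = rng.standard_normal((d_out, rank))

main_w = merge_lora(main_w, lora_a, lora_b, alpha)
ctrl_w = merge_lora(ctrl_w, lora_a, lora_b, alpha)
```

Is this roughly what happens, or is the LoRA applied only to the main branch?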
Thanks for your time!