Hi guys. Very nice work. I would like to know: if I want to fine-tune a LoRA for Wan2.1, what are the correct steps? Can my LoRA work on top of these TurboDiffusion weights? I suspect it won't, because it was trained on the normal attention, not Sparse Linear Attention. What's the best way to make my LoRA work with TurboDiffusion?
- Merge my LoRA weights into the original Wan2.1 base weights, treat the result as a new base model, and then train TurboDiffusion on top of it (see the merge sketch below)?
- Or should I explore something like orthogonal fine-tuning?
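For reference, the first option usually amounts to folding each LoRA delta back into its matching base weight before any further training. Below is a minimal sketch of such a merge with plain PyTorch state dicts; the key layout (`.lora_A.weight` / `.lora_B.weight` paired with a base `.weight`) and the file paths are assumptions for illustration, not the actual Wan2.1 or TurboDiffusion checkpoint format.

```python
import torch

def merge_lora_into_base(base_sd, lora_sd, scale=1.0):
    """Fold LoRA deltas (W += scale * B @ A) into a copy of the base state dict.

    Assumed (hypothetical) key layout: for a base weight "<name>.weight" the
    LoRA checkpoint holds "<name>.lora_A.weight" (r x in_features) and
    "<name>.lora_B.weight" (out_features x r). Adjust to the real key names.
    """
    merged = {k: v.clone() for k, v in base_sd.items()}
    for key in lora_sd:
        if not key.endswith(".lora_A.weight"):
            continue
        prefix = key[: -len(".lora_A.weight")]
        A = lora_sd[key]                        # (r, in_features)
        B = lora_sd[prefix + ".lora_B.weight"]  # (out_features, r)
        base_key = prefix + ".weight"
        if base_key not in merged:
            continue  # naming mismatch; skip rather than guess
        delta = scale * (B.float() @ A.float())
        merged[base_key] = (merged[base_key].float() + delta).to(merged[base_key].dtype)
    return merged

# Usage sketch (paths are placeholders):
# base = torch.load("wan2.1_base.pth", map_location="cpu")
# lora = torch.load("my_lora.pth", map_location="cpu")
# torch.save(merge_lora_into_base(base, lora, scale=1.0), "wan2.1_merged.pth")
```

If that merged checkpoint can serve as the base for the TurboDiffusion / Sparse Linear Attention training step, it would avoid retraining the LoRA itself, but I'm not sure whether the attention replacement preserves the merged behavior.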