LoRA fine-tuning #72

@mali-afridi

Description

Hi guys, very nice work. I would like to know: if I want to fine-tune a LoRA for Wan2.1, what are the correct steps? Can my LoRA work on top of these TurboDiffusion weights? I suspect it won't, because it was trained on standard attention, not Sparse Linear Attention. What's the best way to make my LoRA work with TurboDiffusion?

  1. Merge my LoRA weights into the original Wan2.1 base weights, treat the result as a new base model, and then train TurboDiffusion on top of it (see the sketch after this list)?
  2. Or should I explore something like orthogonal fine-tuning?
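
To make option 1 concrete, here is a minimal sketch of the per-layer merge step, assuming the standard LoRA parameterization W' = W + (alpha / r) * B @ A. The function and tensor names are illustrative only, not taken from the Wan2.1 or TurboDiffusion codebases:

```python
import torch

def merge_lora_weight(W: torch.Tensor, A: torch.Tensor, B: torch.Tensor,
                      alpha: float, rank: int) -> torch.Tensor:
    """Fold a LoRA delta into a base weight: W' = W + (alpha / rank) * B @ A.

    A has shape (rank, in_features) and B has shape (out_features, rank),
    matching the standard LoRA parameterization.
    """
    return W + (alpha / rank) * (B @ A)

# Toy shapes for illustration only (out_features=8, in_features=16, rank=4).
W = torch.randn(8, 16)
A = torch.randn(4, 16)
B = torch.randn(8, 4)
W_merged = merge_lora_weight(W, A, B, alpha=8.0, rank=4)
print(W_merged.shape)  # torch.Size([8, 16])
```

After merging every targeted layer this way, the checkpoint carries no separate LoRA modules, so a downstream training recipe should see it as an ordinary base model.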
