
About inverse_diffusion conditional generation #1

@junmokane

Description


Hi. Thanks for the great work.

While reading the code, I couldn't find the function that takes a class label and generates the corresponding image with the GFlowNet fine-tuned posterior model (p(x|c)). In the posterior baselines (DPS, LGD-MC), I can see that the code takes the class label as a condition and generates samples by applying classifier guidance (R(c, x)) to the prior model (p(x)); a rough sketch of my understanding of that part is below.
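
Just to be concrete, this is roughly what I picture the baselines doing — a minimal sketch of one classifier-guided noise prediction, not the repo's actual code. `prior_eps`, `classifier`, `sigma_t`, and `guidance_scale` are placeholder names I made up for this example:

```python
import torch

def guided_eps(x_t, t, prior_eps, classifier, c, sigma_t, guidance_scale=1.0):
    """Classifier-guided noise prediction: shift the unconditional prior's
    eps by the gradient of log p(c | x_t) at the current noise level."""
    x_t = x_t.detach().requires_grad_(True)
    logits = classifier(x_t, t)                        # (batch, num_classes)
    log_p_c = logits.log_softmax(dim=-1)[:, c].sum()   # log p(c | x_t)
    grad = torch.autograd.grad(log_p_c, x_t)[0]        # grad_{x_t} log p(c | x_t)
    with torch.no_grad():
        eps = prior_eps(x_t, t)                        # unconditional prior p(x)
    return eps - guidance_scale * sigma_t * grad       # guided eps targeting p(x | c)
```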

I also found something in the Langevin dynamics model that takes `finetune_class`, but I'm not sure how it works. Could you elaborate on this part? Sorry if I've misunderstood anything.

Also, could you explain how the sampling works? It seems to use classifier guidance for conditional sampling, but applying classifier guidance on top of a GFN fine-tuned posterior is new to me (or is this a new contribution? Just wondering). As far as I understand, the previous works mentioned in the paper (ADM, DPS, LGD-MC) approximate the posterior without any additional training, whereas RTB additionally fine-tunes the diffusion prior p(x) with LoRA to approximate the posterior p(x|c) ∝ p(c|x) p(x), yet the model does not seem to take the class label as input when sampling. My current reading of that part is sketched below.
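
For reference, this is how I currently picture the RTB sampling side: a plain reverse process with the LoRA-adapted network and no class label passed in, since the class would be baked into the adapted weights. This is only my guess at the intent; `finetuned_eps` and `alpha_bar` are placeholder names, not from the repo:

```python
import torch

@torch.no_grad()
def sample_posterior(finetuned_eps, alpha_bar, x_shape):
    """Deterministic DDIM-style reverse process with the LoRA fine-tuned
    network. Note there is no class-label argument anywhere."""
    T = len(alpha_bar)
    x = torch.randn(x_shape)
    for t in reversed(range(1, T)):
        eps = finetuned_eps(x, t)  # posterior model for p(x | c); no label input
        x0 = (x - (1 - alpha_bar[t]).sqrt() * eps) / alpha_bar[t].sqrt()
        x = alpha_bar[t - 1].sqrt() * x0 + (1 - alpha_bar[t - 1]).sqrt() * eps
    return x
```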
Could you elaborate on this part? Please correct me if I'm wrong. Thanks.
