Hyperparameter Settings and Distillation Rounds in CIFAR-10 Knowledge Distillation #30

@ChangHanCQU

Thank you for your excellent work.

I am currently attempting to reproduce the CIFAR-10 knowledge distillation results from your paper, which rely on both the inversion and adversarial losses.

Could you please provide more details on the following points?

  1. For the distillation setting, should I use the same inversion hyperparameters as those listed under the CIFAR-10 directory? If not, could you kindly provide the corresponding hyperparameters?

  2. As I understand it, your distillation process is iterative: you first generate a batch of synthetic samples, then train the student model on these samples, and repeat this cycle multiple times so that the adversarial loss can take effect. Could you confirm this understanding? If so, could you also share the number of synthetic samples generated in each distillation round and the total number of rounds you conducted? (A sketch of my current understanding follows below.)
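
For reference, here is a minimal PyTorch sketch of the loop I have in mind. Everything in it is my own assumption for the sake of discussion, not taken from your code: the `synthesize_batch` callable, the temperature `T = 4`, and the number of student steps per round are all placeholders.

```python
import torch
import torch.nn.functional as F

def iterative_distillation(teacher, student, synthesize_batch,
                           optimizer, num_rounds, steps_per_round,
                           device, temperature=4.0):
    """Assumed structure: alternate between inverting the teacher
    (with an adversarial term against the current student) and
    training the student on the freshly synthesized batch."""
    teacher.eval()
    for round_idx in range(num_rounds):
        # (1) Synthesize a new batch. Passing the current student is what
        #     would let the adversarial loss react to the student's progress.
        images = synthesize_batch(teacher, student).to(device)

        # (2) Distill the student on this batch with a standard KL loss.
        student.train()
        with torch.no_grad():
            t_logits = teacher(images)
        for _ in range(steps_per_round):
            s_logits = student(images)
            loss = F.kl_div(
                F.log_softmax(s_logits / temperature, dim=1),
                F.softmax(t_logits / temperature, dim=1),
                reduction="batchmean",
            ) * temperature ** 2  # standard T^2 gradient scaling
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return student
```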
