Question about the code #2

@FriedRonaldo

Description

Hi, yaxingwang. Thanks for the interesting work. I have a question about the code.

Before asking, let me explain my understanding.

From reading the paper, I understand that SEMIT requires two training stages under a semi-supervised setting: 1. training an auxiliary classifier with NTPL (100); 2. training the translation model with the labeled and pseudo-labeled samples.
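To make sure I have the two-stage setup right, here is a minimal sketch of how I imagine stage 2's training set is assembled. All names (`build_training_set`, `classifier`) are my own illustration, not from the SEMIT codebase; `classifier` stands in for the NTPL-trained auxiliary classifier F_theta.

```python
def build_training_set(labeled, unlabeled, classifier):
    """Combine ground-truth pairs with classifier-generated pseudo-labels.

    `labeled`   : list of (sample, label) pairs with ground-truth labels
    `unlabeled` : list of samples without labels
    `classifier`: the auxiliary classifier from stage 1 (hypothetical stand-in)
    """
    # Pseudo-label every unlabeled sample with the auxiliary classifier
    pseudo_labeled = [(x, classifier(x)) for x in unlabeled]
    # Stage 2 (the translation model) then trains on the union
    return labeled + pseudo_labeled
```

Is that roughly the intended data flow for the semi-supervised setting?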

However, I cannot find the source code for the NTPL architecture and the pseudo-labeling part (F_theta and M_psi).
I also checked whether the file (animals_list_train.txt) contains the pseudo-labels, but it contains only the ground-truth labels.
After going through the code and the appendices, I found that the hyperparameters of the classifier loss are taken from [23].

So, should I use the classification code from the PENCIL GitHub repository to obtain the pseudo-labels?
And is the current code in this repository intended for the fully supervised setting (100% of samples labeled)?

Is it right that training requires three steps? 1. Train the auxiliary classifier with labeled samples only. 2. Correct the noisy labels with PENCIL to obtain pseudo-labels (classification loss, correction 100 times). 3. Train SEMIT with the labels and pseudo-labels.
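For concreteness, here is how I understand step 2, the label-correction part, sketched as a toy gradient update in the spirit of PENCIL. This is my own minimal NumPy illustration under the assumption that each sample carries a learnable label distribution (stored as logits) that is pulled toward the classifier's prediction; the constants `K`, `lr`, and `steps` are placeholders, not the paper's hyperparameters.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def correct_label(noisy_onehot, clf_probs, K=10.0, lr=1.0, steps=200):
    """Toy gradient-based label correction (my reading of PENCIL's idea).

    `noisy_onehot`: one-hot vector of the (possibly wrong) given label
    `clf_probs`   : the auxiliary classifier's predicted class probabilities
    Returns the corrected pseudo-label (argmax of the updated distribution).
    """
    # Label logits initialized from the noisy one-hot label (scaled by K)
    y_tilde = K * noisy_onehot.astype(float)
    for _ in range(steps):
        yd = softmax(y_tilde)
        # Gradient of CE(target=clf_probs, pred=yd) w.r.t. the logits
        # is simply (yd - clf_probs); step toward the classifier's view.
        y_tilde -= lr * (yd - clf_probs)
    return int(np.argmax(y_tilde))
```

So if the classifier strongly disagrees with the given noisy label, the corrected pseudo-label flips to the classifier's class; otherwise the original label is kept. Is this the kind of correction loop you ran (with the PENCIL hyperparameters) before training SEMIT?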

Thanks.
