
config modification for mar-base training on cifar-100 #107

@cyboTiger

Description


Great work! I'm trying to run a verification experiment: pretrain on the CIFAR-100 dataset (img_size 32x32) and then evaluate the trained MAR-Base model.

So I'm wondering if you have any suggestions on how to modify the eval config. Here's mine:

# NOTE: flags from --online_eval onward are the eval-related configs
torchrun --nproc_per_node=8 --nnodes=1 --node_rank=${NODE_RANK} --master_addr=${MASTER_ADDR} --master_port=${MASTER_PORT} \
main_mar.py \
--img_size 32 --vae_path pretrained_models/vae/kl16.ckpt --vae_embed_dim 16 --vae_stride 16 --patch_size 1 \
--model ${MAR_SIZE} --diffloss_d 6 --diffloss_w 1024 \
--epochs 400 --warmup_epochs 100 --batch_size 64 --blr 1.0e-4 --diffusion_batch_mul 4 \
--output_dir ${OUTPUT_DIR} --wandb \
--data_path ${DATA_PATH} \
--online_eval --eval_bsz 256 \
--cfg 2.9 --cfg_schedule linear  --temperature 1.0 \
--num_iter 4 --num_sampling_steps 100 \
--eval_freq 20 --save_last_freq 5 \
--use_cached --cached_path ${CACHED_PATH} \
# --resume ${OUTPUT_DIR}

In particular, I think num_sampling_steps is the flag most likely to need modification, since the CIFAR image size is only 32x32. Should I change it or keep it the same?
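
For reference, here is the token-count arithmetic I'm basing that guess on. It assumes the sequence length is (img_size // vae_stride // patch_size)^2, matching the default ImageNet setting (256 // 16 // 1 gives a 16x16 grid, i.e. 256 tokens); the snippet below is just that arithmetic, not output from the repo, so please correct me if the latent layout works differently.

# Hedged sanity check, assuming seq_len = (img_size // vae_stride // patch_size) ** 2
# as in the default ImageNet-256 config (16x16 = 256 tokens).
def token_grid(img_size, vae_stride=16, patch_size=1):
    side = img_size // vae_stride // patch_size
    return side, side * side

for img_size in (256, 32):
    side, seq_len = token_grid(img_size)
    print(f"img_size={img_size}: {side}x{side} latent grid -> {seq_len} tokens")
# img_size=256: 16x16 latent grid -> 256 tokens
# img_size=32: 2x2 latent grid -> 4 tokens

So with only 4 tokens, --num_iter 4 already decodes roughly one token per iteration; my (possibly wrong) understanding is that num_sampling_steps only affects the per-token diffusion head rather than the spatial size, which is why I'm unsure whether it needs to change at all.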

Also, it seems that to evaluate FID and IS on 32x32 generated images I'll need a fid_statistics_file other than the fid_stats/adm_in256_stats.npz shipped in this repo. Where can I get one?
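
In case there isn't an official file, here is how I'm planning to build the reference statistics myself: extract Inception-V3 pool3 features over the CIFAR-100 training set and save their mean and covariance. This is only a sketch under my own assumptions; the npz keys "mu"/"sigma", the output file name cifar100_32_stats.npz, and the use of the pytorch-fid package's InceptionV3 wrapper are my choices, not something this repo prescribes, so I'd double-check how the evaluator actually loads its fid_statistics_file before relying on it.

# Hedged sketch: compute FID reference statistics (Inception pool3 mean/covariance)
# for CIFAR-100 at 32x32. The npz keys "mu"/"sigma" are an ASSUMED format -- check
# how this repo's evaluator loads its fid_statistics_file before using this.
import numpy as np
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from pytorch_fid.inception import InceptionV3  # pip install pytorch-fid

device = "cuda" if torch.cuda.is_available() else "cpu"

# Inception block that outputs the standard 2048-d pool3 features used for FID.
block_idx = InceptionV3.BLOCK_INDEX_BY_DIM[2048]
model = InceptionV3([block_idx]).to(device).eval()

# CIFAR-100 train images in [0, 1]; the wrapper resizes and normalizes internally.
dataset = datasets.CIFAR100(root="./data", train=True, download=True,
                            transform=transforms.ToTensor())
loader = DataLoader(dataset, batch_size=256, num_workers=4)

features = []
with torch.no_grad():
    for images, _ in loader:
        feat = model(images.to(device))[0]            # (N, 2048, 1, 1)
        features.append(feat.squeeze(-1).squeeze(-1).cpu().numpy())
features = np.concatenate(features, axis=0)

mu = np.mean(features, axis=0)
sigma = np.cov(features, rowvar=False)
np.savez("fid_stats/cifar100_32_stats.npz", mu=mu, sigma=sigma)
print("saved stats for", features.shape[0], "images")

Does that match the format your evaluation code expects, or is there a different reference-batch format I should produce?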

Thanks in advance for your response!
