This repository contains the official implementation of the paper "Mimicking Human Visual Development for Learning Robust Image Representations", accepted at ICVGIP 2025.
📖 Paper
The human visual system is remarkably adept at adapting to changes in the input distribution — a capability modern convolutional neural networks (CNNs) still struggle to match. Drawing inspiration from the developmental trajectory of human vision, we propose a progressive blurring curriculum to improve the generalization and robustness of CNNs. Human infants are born with poor visual acuity, gradually refining their ability to perceive fine details. Mimicking this process, we begin training CNNs on highly blurred images during the initial epochs and progressively reduce the blur as training advances. This approach encourages the network to prioritize global structures over high-frequency artifacts, improving robustness against distribution shifts and noisy inputs. Challenging prior claims that blurring in the initial training epochs imposes a stimulus deficit and irreversibly harms model performance, we reveal that early-stage blurring enhances generalization with minimal impact on in-domain accuracy. Our experiments demonstrate that the proposed curriculum reduces mean corruption error (mCE) by up to 8.30% on CIFAR-10-C and 4.43% on ImageNet-100-C datasets, compared to standard training without blurring. Unlike static blur-based augmentation, which applies blurred images randomly throughout training, our method follows a structured progression, yielding consistent gains across various datasets. Furthermore, our approach complements other augmentation techniques, such as CutMix and MixUp, and enhances both natural and adversarial robustness against common attack methods.
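The progressive blurring curriculum described above can be sketched as a schedule that starts training with heavy Gaussian blur and steps the blur down to zero as epochs advance. The function below is an illustrative sketch only, not the exact schedule from the paper; the name `blur_sigma`, the linear step-down shape, and the default values of `sigma_max` and `blur_frac` are assumptions.

```python
def blur_sigma(epoch: int, total_epochs: int,
               sigma_max: float = 2.0, blur_frac: float = 0.5) -> float:
    """Illustrative progressive-blur schedule (assumed, not the paper's exact one):
    start at sigma_max and decay linearly to 0 over the first `blur_frac`
    fraction of training; train on unblurred images for the remaining epochs."""
    cutoff = max(1, int(total_epochs * blur_frac))
    if epoch >= cutoff:
        return 0.0  # later epochs see sharp images only
    return sigma_max * (1.0 - epoch / cutoff)
```

The returned sigma could then parameterize a per-epoch blur transform, e.g. `torchvision.transforms.GaussianBlur`, applied to each training batch.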
We recommend using conda. Create a new virtual environment with conda:
conda env create -f environment.yml
Activate the environment:
conda activate vac
The code has been tested on Ubuntu 20.04 with an NVIDIA Tesla V100 GPU.
Train ResNet-18 on CIFAR-10 using vanilla training:
python train_vac.py --cfg configs/cifar10_preactresnet18_vanilla.yaml
Train ResNet-18 on CIFAR-10 using VAC:
python train_vac.py --cfg configs/cifar10_preactresnet18_vac.yaml
The script train_vac.py also supports
- Data augmentations like CutMix, MixUp, RandAugment, and AutoAugment
- Other training strategies like SuperLoss
- Variants of the curriculum like Constant Blur, No Replay, Linear Curriculum, and Inverse Curriculum
These can be achieved by setting the appropriate input arguments. Some samples are provided in the configs/ directory.
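The curriculum variants listed above differ only in how the blur strength evolves over training. The sketch below illustrates plausible schedule shapes for three of them; the function names, linear shapes, and default sigma are assumptions for illustration, not the exact definitions used in the training script.

```python
# Illustrative blur schedules for the curriculum variants; the shapes and
# parameters are assumptions, not the repo's exact implementations.

def constant_blur(epoch: int, total_epochs: int, sigma: float = 2.0) -> float:
    """Constant Blur: the same blur level at every epoch."""
    return sigma

def linear_curriculum(epoch: int, total_epochs: int, sigma_max: float = 2.0) -> float:
    """Linear Curriculum: blur decays linearly from sigma_max to 0."""
    return sigma_max * (1.0 - epoch / max(1, total_epochs - 1))

def inverse_curriculum(epoch: int, total_epochs: int, sigma_max: float = 2.0) -> float:
    """Inverse Curriculum: blur increases linearly from 0 to sigma_max."""
    return sigma_max * (epoch / max(1, total_epochs - 1))
```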
Some other supported training scripts are:
train_fixres_finetune.py - Train using FixRes
train_vac_continuous.py - Train using a continuous curriculum
To evaluate trained models on clean and corrupted datasets, run test_corruption.py:
python test_corruption.py --net_type preactresnet18 --dataset cifar10 --pretrained <model_path>
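The corruption benchmark reports mean corruption error (mCE), which, following the standard CIFAR-10-C/ImageNet-C protocol, sums a model's top-1 error over the five severity levels of each corruption type, normalizes by a baseline model's error on the same corruption, and averages over corruption types. A minimal sketch of that computation (the dictionary layout and function name are assumptions, not the script's actual interface):

```python
def mean_corruption_error(model_err: dict, baseline_err: dict) -> float:
    """Compute mCE from per-corruption error lists (fractions in [0, 1]).

    model_err / baseline_err map corruption name -> list of top-1 errors,
    one per severity level. Each corruption's summed error is normalized
    by the baseline's summed error, then averaged over corruptions."""
    ces = []
    for corruption, errs in model_err.items():
        ces.append(sum(errs) / sum(baseline_err[corruption]))
    return 100.0 * sum(ces) / len(ces)  # reported as a percentage
```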
To evaluate on adversarial attacks, run test_adversarial.py:
python test_adversarial.py --net_type preactresnet18 --dataset cifar10 --pretrained <model_path>
Note: Exact numbers may vary slightly due to random seeds and hardware differences.
If you find this work useful, please cite us:
@misc{raj2025mimickinghumanvisualdevelopment,
title={Mimicking Human Visual Development for Learning Robust Image Representations},
author={Ankita Raj and Kaashika Prajaapat and Tapan Kumar Gandhi and Chetan Arora},
year={2025},
eprint={2512.14360},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2512.14360},
}
Contact: Ankita Raj (ankita.raj@cse.iitd.ac.in)
Our implementation is based on the following repositories:
