KD-ViT2CNN

The code in this repository demonstrates knowledge distillation from a Vision Transformer (ViT) to a Convolutional Neural Network (CNN).

Background

Knowledge distillation is a technique that transfers the knowledge of a large pretrained model (the "teacher model") to a smaller model (the "student model").
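
In its classic logit-based form (Hinton et al., 2015), the student is trained to match the teacher's softened output distribution. The sketch below illustrates that loss as a generic example; it is not necessarily the loss used in this repository.

```python
# Minimal sketch of classic logit distillation: the student is trained to
# match the teacher's softened output distribution (generic example).
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, temperature=4.0):
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_student = F.log_softmax(student_logits / t, dim=-1)
    # Scale by t^2 to keep gradient magnitudes comparable across temperatures.
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * (t * t)
```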

In this repository, the distillation is from a ViT (i.e. DINOv3) to a CNN, which involves heterogeneous feature distillation. An additional feature projector is needed to align the CNN's spatial feature maps with the ViT's token-based features, as sketched below.
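
A minimal sketch of such a projector is given below, assuming PyTorch. The module structure (a 1x1 convolution followed by spatial resizing and flattening) and the simple MSE feature-matching loss are illustrative choices, not the exact implementation in this repository.

```python
# Sketch (not the repository's exact code): project a CNN feature map onto
# the ViT token space so the two can be compared with a feature-matching loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureProjector(nn.Module):
    """Aligns a CNN spatial feature map (B, C, H, W) with ViT patch tokens (B, N, D)."""
    def __init__(self, cnn_channels: int, vit_dim: int, num_tokens: int):
        super().__init__()
        self.proj = nn.Conv2d(cnn_channels, vit_dim, kernel_size=1)  # channel alignment
        self.num_tokens = num_tokens

    def forward(self, cnn_feat: torch.Tensor) -> torch.Tensor:
        x = self.proj(cnn_feat)                      # (B, D, H, W)
        # Resize the spatial grid to match the ViT's patch grid (assumed square).
        grid = int(self.num_tokens ** 0.5)
        x = F.interpolate(x, size=(grid, grid), mode="bilinear", align_corners=False)
        return x.flatten(2).transpose(1, 2)          # (B, N, D) token-like features

def feature_distill_loss(student_tokens: torch.Tensor, teacher_tokens: torch.Tensor) -> torch.Tensor:
    # MSE between projected CNN features and the (frozen) ViT patch tokens.
    return F.mse_loss(student_tokens, teacher_tokens.detach())
```

In such a setup, only the student and the projector receive gradients; the teacher's patch tokens (typically with the class token excluded) serve as a fixed target.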

Distillation Dataset

Selecting a distillation dataset involves balancing generalization and task relevance.

Since the student network is also pretrained on a large dataset (e.g. ImageNet), the intuition is that it already has strong generalization capability of its own.

The distillation dataset therefore comprises a small amount of general data and a larger portion of task-specific data, as sketched below.
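
A rough illustration of building such a mixture is shown below; the helper name, the general-data fraction, and the use of torch-style datasets are assumptions for the sketch, not values prescribed by this repository.

```python
# Rough sketch (placeholder datasets and ratio): mix a small random slice of a
# general-purpose dataset with the full task-specific dataset for distillation.
import torch
from torch.utils.data import ConcatDataset, DataLoader, Subset

def build_distillation_dataset(general_ds, task_ds, general_fraction=0.1, seed=0):
    """Subsample the general dataset so it makes up roughly `general_fraction` of the mix."""
    g = torch.Generator().manual_seed(seed)
    n_general = int(len(task_ds) * general_fraction / (1.0 - general_fraction))
    n_general = min(n_general, len(general_ds))
    indices = torch.randperm(len(general_ds), generator=g)[:n_general].tolist()
    return ConcatDataset([Subset(general_ds, indices), task_ds])

# Usage (datasets are assumed to yield compatible image tensors):
# mixed = build_distillation_dataset(imagenet_subset, my_task_dataset, general_fraction=0.1)
# loader = DataLoader(mixed, batch_size=64, shuffle=True)
```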
