DINOv2: Self-Distillation for Vision Without Labels
DINOv2 is a powerful self-supervised vision model that learns visual representations without using labels. It builds on the original DINO framework, using a student–teacher architecture and advanced augmentations to produce strong, semantically rich embeddings.
1. Student–Teacher Architecture
DINOv2 uses two networks:
- a student network with parameters \( \theta \)
- a teacher network with parameters \( \xi \)
Both networks receive different augmented views of the same image.
The student learns by matching the teacher’s output distribution. The teacher is updated using an exponential moving average (EMA) of the student.
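A minimal sketch of this two-network setup in PyTorch (the backbone, dimensions, and variable names here are illustrative stand-ins, not the actual DINOv2 code):

```python
import copy
import torch
import torch.nn as nn

# Toy stand-in for a ViT backbone plus projection head; DINOv2 uses Vision Transformers.
student = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 224 * 224, 256),
    nn.GELU(),
    nn.Linear(256, 1024),
)
teacher = copy.deepcopy(student)       # the teacher starts as an exact copy of the student
for p in teacher.parameters():
    p.requires_grad = False            # the teacher is never updated by backpropagation

# Two different augmented views of the same batch of images (random tensors as placeholders).
view_a = torch.randn(8, 3, 224, 224)
view_b = torch.randn(8, 3, 224, 224)

student_logits = student(view_a)       # gradients flow only through the student
with torch.no_grad():
    teacher_logits = teacher(view_b)   # teacher outputs serve as training targets
```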
2. Image Embeddings
The student and teacher networks (Vision Transformers in DINOv2) map each augmented view to an output embedding.
These embeddings are then passed through small MLP projection heads to produce logits.
The logits are converted into probability distributions using a temperature-scaled softmax.
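Here is a rough sketch of a projection head and the temperature-scaled softmax (layer sizes, the output dimension, and the temperature value are illustrative, not the exact DINOv2 configuration):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProjectionHead(nn.Module):
    """Small MLP mapping backbone embeddings to logits over K output dimensions."""
    def __init__(self, embed_dim: int = 768, hidden_dim: int = 2048, out_dim: int = 4096):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.mlp(x)

embeddings = torch.randn(8, 768)           # backbone outputs for a batch of 8 views
logits = ProjectionHead()(embeddings)      # shape: (8, 4096)
probs = F.softmax(logits / 0.1, dim=-1)    # temperature-scaled softmax with temperature 0.1
```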
3. Teacher and Student Distributions
Three quantities control the shape of these distributions:
- \( \tau_s \): the student temperature (a higher temperature gives a smoother distribution)
- \( \tau_t \): the teacher temperature (a lower temperature gives a sharper distribution)
- \( c \): a centering vector applied to the teacher logits to prevent collapse
The teacher’s output is sharper to provide strong training targets, and the centering vector stabilizes training.
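Concretely, in the DINO formulation that DINOv2 inherits, write \( g_\theta \) and \( g_\xi \) for the student and teacher networks (backbone plus projection head) and \( K \) for the output dimension. The two distributions are then:

\[
P_s(x)^{(i)} = \frac{\exp\big(g_\theta(x)^{(i)} / \tau_s\big)}{\sum_{k=1}^{K} \exp\big(g_\theta(x)^{(k)} / \tau_s\big)},
\qquad
P_t(x)^{(i)} = \frac{\exp\big((g_\xi(x)^{(i)} - c^{(i)}) / \tau_t\big)}{\sum_{k=1}^{K} \exp\big((g_\xi(x)^{(k)} - c^{(k)}) / \tau_t\big)}
\]

with the centering vector \( c \) maintained as a running average of the teacher's outputs.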
4. DINO Loss: Cross-Entropy Between Student and Teacher
The main training objective is to make the student match the teacher:
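\[
\mathcal{L}(x_1, x_2) = - \sum_{i=1}^{K} P_t(x_1)^{(i)} \, \log P_s(x_2)^{(i)}
\]

This is the cross-entropy objective inherited from DINO: \( x_1 \) and \( x_2 \) are two different augmented views of the same image, and in practice the loss is averaged over all such view pairs.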
A stop-gradient is applied to the teacher output, so no gradient flows into the teacher.
This self-distillation forces the model to develop rich, semantically coherent features.
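As a compact sketch in PyTorch (tensor names, temperatures, and the centering tensor are illustrative; the real implementation also handles multiple crops and updates the center):

```python
import torch
import torch.nn.functional as F

def dino_loss(student_logits, teacher_logits, center,
              student_temp=0.1, teacher_temp=0.04):
    """Cross-entropy between the sharpened, centered teacher distribution and the student distribution."""
    student_log_probs = F.log_softmax(student_logits / student_temp, dim=-1)
    teacher_probs = F.softmax((teacher_logits - center) / teacher_temp, dim=-1).detach()
    # .detach() is the stop-gradient: no gradient flows back into the teacher.
    return -(teacher_probs * student_log_probs).sum(dim=-1).mean()

# Toy usage with random logits.
K = 1024
loss = dino_loss(torch.randn(8, K), torch.randn(8, K), center=torch.zeros(K))
```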
5. Teacher Update: Exponential Moving Average (EMA)
The teacher is never directly optimized. Instead, it is updated as a smoothed version of the student:
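\[
\xi \leftarrow \lambda \, \xi + (1 - \lambda) \, \theta
\]

Here \( \lambda \) is a momentum coefficient close to 1, so the teacher changes slowly; in the DINO family it is scheduled towards 1 over the course of training.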
This stabilizes training and prevents collapse.
6. Normalized Embeddings
DINOv2 normalizes embeddings to lie on a hypersphere:
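\[
\hat{z} = \frac{z}{\lVert z \rVert_2}
\]

With unit-norm embeddings, similarity between two images reduces to a dot product (cosine similarity).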
This is crucial for producing usable, consistent representations for:
- image retrieval
- clustering
- semantic search
7. Advanced Multi-Crop Augmentation
DINOv2 uses multi-view augmentations:
- Global crops: large views
- Local crops: small, zoomed-in views
The student sees all crops; the teacher sees only global crops.
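A rough sketch of such a multi-crop pipeline with torchvision (the crop sizes, scale ranges, and number of crops below are illustrative choices, not the exact DINOv2 recipe):

```python
from PIL import Image
from torchvision import transforms

# Global crops: large views covering most of the image.
global_crop = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.4, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

# Local crops: small, zoomed-in views of the same image.
local_crop = transforms.Compose([
    transforms.RandomResizedCrop(96, scale=(0.05, 0.4)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

image = Image.new("RGB", (512, 512))                   # placeholder image
global_views = [global_crop(image) for _ in range(2)]  # seen by both teacher and student
local_views = [local_crop(image) for _ in range(8)]    # seen only by the student
```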
8. Final Representation Quality
After training, the backbone’s output embeddings are directly used for downstream tasks. DINOv2 achieves strong performance even without finetuning.
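As a usage sketch, the DINOv2 repository exposes pretrained backbones through torch.hub. Something like the following extracts an image embedding for retrieval, clustering, or a linear classifier (the entry-point name and output size follow the repository's documented ViT-S/14 model, but check the repo for the current list of models):

```python
import torch

# Load a small pretrained DINOv2 backbone (downloads weights on first use).
model = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
model.eval()

# Placeholder input; real images should be resized and ImageNet-normalized first.
image = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    embedding = model(image)   # global image embedding from the backbone

print(embedding.shape)         # e.g. torch.Size([1, 384]) for ViT-S/14
```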
9. Summary of Key Ideas
- No labels needed (self-supervised)
- Student tries to match teacher outputs
- Teacher updated via EMA (not trained directly)
- Temperature scaling + centering prevent collapse
- Multi-crop augmentation enhances invariance
- Produces state-of-the-art visual embeddings
References
Oquab, M., Darcet, T., Moutakanni, T., et al. (2023). DINOv2: Learning Robust Visual Features without Supervision. arXiv preprint, arXiv:2304.07193.
License & Attribution
This blog includes video/media from the DINOv2 GitHub repository, which is licensed under the Apache License 2.0.
You must cite the original work if you use DINOv2 in research:
@misc{oquab2023dinov2,
title={DINOv2: Learning Robust Visual Features without Supervision},
author={Oquab, Maxime and Darcet, Timothée and Moutakanni, Théo and Vo, Huy V. and Szafraniec, Marc and Khalidov, Vasil and Fernandez, Pierre and Haziza, Daniel and Massa, Francisco and El-Nouby, Alaaeldin and Howes, Russell and Huang, Po-Yao and Xu, Hu and Sharma, Vasu and Li, Shang-Wen and Galuba, Wojciech and Rabbat, Mike and Assran, Mido and Ballas, Nicolas and Synnaeve, Gabriel and Misra, Ishan and Jegou, Herve and Mairal, Julien and Labatut, Patrick and Joulin, Armand and Bojanowski, Piotr},
journal={arXiv:2304.07193},
year={2023}
}
