These are Vision Transformer models trained following the method described in the papers: "DINOv2: Learning Robust Visual Features without Supervision" and "Vision Transformers Need Registers".
We provide 8 models: 4 Vision Transformer sizes (ViT-S/14, ViT-B/14, ViT-L/14, and ViT-g/14), each available with and without register tokens.
The model takes an image as input and returns a class token and patch tokens, and optionally 4 register tokens.
The embedding dimension is 384 for ViT-S, 768 for ViT-B, 1024 for ViT-L, and 1536 for ViT-g.
The models follow a Transformer architecture, with a patch size of 14. For the register variants, 4 register tokens, learned during training, are appended to the input sequence after the patch embedding.
For a 224x224 image, this results in 1 class token + 256 patch tokens, and optionally 4 register tokens.
The models can accept larger images provided the image dimensions are multiples of the patch size (14). If this condition is not met, the model will crop the input to the closest smaller multiple of the patch size.
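The size rules above can be sketched in plain Python (the function and variable names here are illustrative, not part of the DINOv2 API): crop each dimension down to the nearest multiple of 14, then count one token per 14x14 patch, plus the class token and, for register variants, 4 register tokens.

```python
# Illustrative sketch of the input-size rule; names are not from the DINOv2 API.
PATCH = 14

def crop_to_patch_multiple(h, w, patch=PATCH):
    """Crop each dimension down to the nearest smaller multiple of the patch size."""
    return (h // patch) * patch, (w // patch) * patch

def token_counts(h, w, registers=False, patch=PATCH):
    """Number of tokens the model produces for an h x w input image."""
    h, w = crop_to_patch_multiple(h, w, patch)
    patches = (h // patch) * (w // patch)
    return {"class": 1, "register": 4 if registers else 0, "patch": patches}

print(token_counts(224, 224))                   # 224/14 = 16 -> 16*16 = 256 patch tokens
print(token_counts(230, 300, registers=True))   # cropped to 224x294 -> 16*21 = 336 patch tokens
```

For a 224x224 input this reproduces the 1 class token + 256 patch tokens stated above.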
Developed by: Meta AI
Model type: Vision Transformer
License: Apache License 2.0
Repository: https://github.com/facebookresearch/dinov2
The models are vision backbones providing multi-purpose features for downstream tasks.
The models can be used without fine-tuning, with downstream classifiers as simple as linear layers, to obtain competitive results:
It is technically possible to perform fine-tuning on the models, for small gains (we measured +2% on ImageNet-1k classification). We recommend keeping this as a very last step and only when necessary, as the features already provide good performance out-of-the-box.
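The "frozen features + linear classifier" recipe can be sketched as follows. This is a minimal illustration, not the paper's evaluation code: `FEAT_DIM` is the ViT-S/14 embedding dimension, and the random features stand in for class tokens produced by a frozen backbone; only the linear head receives gradients.

```python
import torch
import torch.nn as nn

# Hedged sketch of linear probing: the backbone is frozen, and only a single
# nn.Linear head is trained on its output features.
FEAT_DIM, NUM_CLASSES = 384, 10  # 384 = ViT-S/14 embedding dim; 10 classes for illustration

head = nn.Linear(FEAT_DIM, NUM_CLASSES)
optimizer = torch.optim.SGD(head.parameters(), lr=1e-3)

# Stand-in for frozen backbone features: with a real model you would feed it
# images and use the class token (no gradients flow into the backbone).
features = torch.randn(32, FEAT_DIM)
labels = torch.randint(0, NUM_CLASSES, (32,))

logits = head(features)
loss = nn.functional.cross_entropy(logits, labels)
loss.backward()
optimizer.step()
print(logits.shape)  # torch.Size([32, 10])
```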
Despite improvements thanks to the training method not using annotations, we still observe significant biases in our models toward rich households from Western countries.
We expect fine-tuning will increase the biases in the features produced by the model as they will be tuned to the fine-tuning labels.
Use the code below to get started with the model.
import torch
# DINOv2
dinov2_vits14 = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14')
dinov2_vitb14 = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitb14')
dinov2_vitl14 = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitl14')
dinov2_vitg14 = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitg14')
# DINOv2 with registers
dinov2_vits14_reg = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14_reg')
dinov2_vitb14_reg = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitb14_reg')
dinov2_vitl14_reg = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitl14_reg')
dinov2_vitg14_reg = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitg14_reg')
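Once loaded, the models expose `forward_features`, which in the facebookresearch/dinov2 repository returns a dict of normalized tokens. The sketch below uses random stand-in tensors with the documented key names so it runs without downloading weights; with a real model you would call `model.forward_features(images)` on a `(B, 3, H, W)` batch whose spatial dimensions are multiples of 14.

```python
import torch

# Stand-in for the output of model.forward_features(images) for a ViT-S/14
# register model on a 224x224 batch of 2 images. The dict keys match the
# facebookresearch/dinov2 repo; the tensors here are random placeholders.
batch, dim, patches = 2, 384, 256
out = {
    "x_norm_clstoken": torch.randn(batch, dim),
    "x_norm_regtokens": torch.randn(batch, 4, dim),   # present for *_reg models
    "x_norm_patchtokens": torch.randn(batch, patches, dim),
}

cls_token = out["x_norm_clstoken"]        # (B, D): global image descriptor
patch_tokens = out["x_norm_patchtokens"]  # (B, 256, D): one token per 14x14 patch
print(cls_token.shape, patch_tokens.shape)
```

The class token is typically used for image-level tasks (classification, retrieval), while the patch tokens feed dense tasks such as depth estimation and segmentation.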
We refer users to the associated papers for the evaluation protocols.
| model | with registers | ImageNet-1k k-NN classif. (acc) | ImageNet-1k linear classif. (acc) | ImageNet-1k V2 linear classif. (acc) | NYU-Depth v2 depth (RMSE, linear 4 layers) | SUN-RGBD depth (RMSE, NYU-D transfer) | ADE20k segm. (mAP, multiscale) | iNaturalist 2018 linear classif. (acc) | Oxford-H nearest-neighbor retrieval (mAP) |
|---|---|---|---|---|---|---|---|---|---|
| ViT-S/14 | no | 79.0% | 81.1% | 70.8% | 0.417 | 0.431 | 47.2 | 69.5% | 43.2 |
| ViT-S/14 | yes | 79.1% | 80.9% | 71.0% | N/A | N/A | N/A | 67.6% | 39.5 |
| ViT-B/14 | no | 82.1% | 84.5% | 74.9% | 0.362 | 0.400 | 51.3 | 76.3% | 49.5 |
| ViT-B/14 | yes | 82.0% | 84.6% | 75.6% | N/A | N/A | N/A | 73.8% | 51.0 |
| ViT-L/14 | no | 83.5% | 86.3% | 77.6% | 0.333 | 0.396 | 53.1 | 79.8% | 54.0 |
| ViT-L/14 | yes | 83.8% | 86.7% | 78.5% | N/A | N/A | N/A | 80.9% | 55.7 |
| ViT-g/14 | no | 83.5% | 86.5% | 78.4% | 0.298 | 0.362 | 53.0 | 81.6% | 52.3 |
| ViT-g/14 | yes | 83.7% | 87.1% | 78.8% | N/A | N/A | N/A | 81.5% | 58.2 |
Hardware: Nvidia A100 GPUs
Software: PyTorch 2.0, xFormers 0.0.18
BibTeX
@misc{oquab2023dinov2,
title={DINOv2: Learning Robust Visual Features without Supervision},
author={Oquab, Maxime and Darcet, Timothée and Moutakanni, Theo and Vo, Huy and Szafraniec, Marc and Khalidov, Vasil and Fernandez, Pierre and Haziza, Daniel and Massa, Francisco and El-Nouby, Alaaeldin and Howes, Russell and Huang, Po-Yao and Xu, Hu and Sharma, Vasu and Li, Shang-Wen and Galuba, Wojciech and Rabbat, Mike and Assran, Mido and Ballas, Nicolas and Synnaeve, Gabriel and Misra, Ishan and Jegou, Herve and Mairal, Julien and Labatut, Patrick and Joulin, Armand and Bojanowski, Piotr},
journal={arXiv:2304.07193},
year={2023}
}
@misc{darcet2023vitneedreg,
title={Vision Transformers Need Registers},
author={Darcet, Timothée and Oquab, Maxime and Mairal, Julien and Bojanowski, Piotr},
journal={arXiv:2309.16588},
year={2023}
}