
DeiT Tiny Distilled Patch16 224

Developed by Facebook
This model is a distilled version of the Data-efficient Image Transformer (DeiT), pretrained and fine-tuned on ImageNet-1k at 224x224 resolution. It learns efficiently from a teacher model through knowledge distillation.
Downloads 6,016
Release Date: 3/2/2022

Model Overview

This model is the distilled variant of the Data-efficient Image Transformer (DeiT), which builds on the Vision Transformer (ViT) architecture. A dedicated distillation token lets the model learn from a CNN teacher during pretraining and fine-tuning. It is primarily used for image classification.
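For reference, here is a minimal inference sketch using the Hugging Face transformers library; it assumes the checkpoint is published as facebook/deit-tiny-distilled-patch16-224 and that an image file is available locally.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, DeiTForImageClassificationWithTeacher

checkpoint = "facebook/deit-tiny-distilled-patch16-224"  # assumed checkpoint id
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = DeiTForImageClassificationWithTeacher.from_pretrained(checkpoint)

image = Image.open("example.jpg")  # placeholder image path
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    # At inference the model averages the class-head and distillation-head logits.
    logits = model(**inputs).logits

predicted = logits.argmax(-1).item()
print(model.config.id2label[predicted])
```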

Model Features

Distillation Learning
Learns efficiently from a CNN teacher model through a dedicated distillation token, improving accuracy over training on labels alone (see the loss sketch after this list).
Efficient Training
Pretrained and fine-tuned on ImageNet-1k alone at 224x224 resolution, with no external training data required.
Tiny Size
With only 6M parameters, the model is suitable for resource-constrained environments.
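To make the distillation feature concrete, below is a simplified PyTorch sketch of the hard-label distillation objective described in the DeiT paper; the function and tensor names are illustrative, not part of this model's API.

```python
import torch
import torch.nn.functional as F

def hard_distillation_loss(cls_logits: torch.Tensor,
                           dist_logits: torch.Tensor,
                           teacher_logits: torch.Tensor,
                           labels: torch.Tensor) -> torch.Tensor:
    """Hard-label distillation (DeiT): the class token is supervised by the
    ground-truth label, the distillation token by the teacher's predicted
    class, and the two cross-entropy terms are weighted equally."""
    teacher_labels = teacher_logits.argmax(dim=-1)  # teacher's hard predictions
    return 0.5 * F.cross_entropy(cls_logits, labels) + \
           0.5 * F.cross_entropy(dist_logits, teacher_labels)
```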

Model Capabilities

Image Classification
Visual Feature Extraction (see the sketch after this list)
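As an illustration of feature extraction, the sketch below uses the transformers DeiTModel class to obtain token-level embeddings; the checkpoint id and image path are assumptions.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, DeiTModel

checkpoint = "facebook/deit-tiny-distilled-patch16-224"  # assumed checkpoint id
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = DeiTModel.from_pretrained(checkpoint)

image = Image.open("example.jpg")  # placeholder image path
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    # last_hidden_state has shape (1, 198, 192) for DeiT-tiny:
    # a [CLS] token, a distillation token, and 196 patch embeddings.
    features = model(**inputs).last_hidden_state
```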

Use Cases

Image Classification
ImageNet Image Classification
Classifies images into one of the 1,000 ImageNet classes.
Top-1 accuracy: 74.5%; Top-5 accuracy: 91.9%.