
DeiT Small Distilled Patch16 224

Developed by Facebook
The distilled DeiT model was pre-trained and fine-tuned on ImageNet-1k at 224x224 resolution, learning from a teacher CNN via a distillation token
Downloads 2,253
Release Time: 3/2/2022

Model Overview

This model is a distilled version of the Vision Transformer (ViT): during training, a dedicated distillation token learns to match the predictions of a teacher CNN, improving image classification performance
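The distillation objective can be sketched as follows. In DeiT's "hard" variant, the class token is trained against the ground-truth label while the distillation token is trained against the teacher CNN's argmax prediction, and the two cross-entropy terms are averaged. This is a minimal NumPy sketch under those assumptions; the function names are illustrative, not the actual training code:

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, target_idx):
    # mean negative log-likelihood of the target class indices
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(target_idx)), target_idx]))

def hard_distillation_loss(cls_logits, dist_logits, teacher_logits, labels):
    # class token head is supervised by the ground-truth labels;
    # distillation token head by the teacher's hard (argmax) predictions;
    # the DeiT paper averages the two terms with equal weight
    teacher_labels = teacher_logits.argmax(axis=-1)
    return 0.5 * cross_entropy(cls_logits, labels) \
         + 0.5 * cross_entropy(dist_logits, teacher_labels)
```

At inference time the two heads are simply averaged, so the distillation token costs nothing extra beyond one additional input token.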

Model Features

Distillation Learning
Learns from a teacher CNN via a distillation token to improve model performance
Efficient Training
Training can be completed in just 3 days on a single 8-GPU node
Small Model Size
Only 22 million parameters, suitable for deployment in resource-constrained environments

Model Capabilities

Image Classification
Visual Feature Extraction

Use Cases

Computer Vision
ImageNet Image Classification
Classify images into 1000 ImageNet categories
81.2% top-1 accuracy
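A typical inference sketch using the Hugging Face transformers library (assuming transformers, torch, Pillow, and requests are installed; facebook/deit-small-distilled-patch16-224 is the standard Hub checkpoint id for this model):

```python
from PIL import Image
import requests
import torch
from transformers import AutoImageProcessor, DeiTForImageClassificationWithTeacher

# Load the distilled DeiT-Small checkpoint and its preprocessing config
processor = AutoImageProcessor.from_pretrained("facebook/deit-small-distilled-patch16-224")
model = DeiTForImageClassificationWithTeacher.from_pretrained("facebook/deit-small-distilled-patch16-224")

# Any RGB image works; here a sample COCO image is fetched over HTTP
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Resize/normalize to 224x224 and run the forward pass
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # one score per ImageNet-1k class

predicted = logits.argmax(-1).item()
print(model.config.id2label[predicted])
```

The "WithTeacher" head averages the class-token and distillation-token logits, which is how the model is meant to be evaluated.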