
# Cross-modal Distillation

## TinyCLIP-ViT-39M-16-Text-19M-YFCC15M (MIT)

TinyCLIP is a cross-modal distillation approach for large-scale language-image pre-trained models. Through affinity mimicking and weight inheritance, it strikes a strong balance between speed and accuracy.

- Tags: Text-to-Image, Transformers
- Author: wkcn
- Downloads: 654 · Likes: 0
## TinyCLIP-ViT-40M-32-Text-19M-LAION400M (MIT)

TinyCLIP is a cross-modal distillation method for large-scale language-image pre-trained models, enabling efficient training of small-scale CLIP models through affinity mimicking and weight inheritance.

- Tags: Text-to-Image, Transformers
- Author: wkcn
- Downloads: 4,675 · Likes: 5
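The affinity mimicking named in the TinyCLIP descriptions has the student model match the teacher's image-text affinity distributions rather than its raw embeddings. A minimal NumPy sketch of that idea, assuming a symmetric cross-entropy objective over temperature-scaled cosine similarities (function names and the temperature value are illustrative, not from the TinyCLIP code):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def affinity(img, txt, tau=0.07):
    # L2-normalize embeddings, then turn the scaled image-text
    # similarity matrix into a row-wise probability distribution
    img = img / np.linalg.norm(img, axis=1, keepdims=True)
    txt = txt / np.linalg.norm(txt, axis=1, keepdims=True)
    return softmax(img @ txt.T / tau, axis=1)

def affinity_mimicking_loss(t_img, t_txt, s_img, s_txt):
    # Cross-entropy between teacher and student affinities,
    # averaged over the image-to-text and text-to-image directions
    i2t = -(affinity(t_img, t_txt) * np.log(affinity(s_img, s_txt) + 1e-12)).sum(axis=1).mean()
    t2i = -(affinity(t_txt, t_img) * np.log(affinity(s_txt, s_img) + 1e-12)).sum(axis=1).mean()
    return (i2t + t2i) / 2
```

The loss is minimized when the student reproduces the teacher's affinity distributions exactly, at which point it reduces to the entropy of the teacher's own affinities.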
## clip-vit-b-32-japanese-v1

A Japanese CLIP text/image encoder, converted from the English CLIP model via distillation.

- Tags: Text-to-Image, Transformers, Japanese
- Author: sonoisa
- Downloads: 690 · Likes: 21