# Distilled Models

## DeepSeek-R1-Distill-Qwen-7B

DeepSeek-R1-Distill-Qwen-7B is a distilled large language model released by deepseek-ai, based on the Qwen 7B architecture and suited to text generation tasks.

Tags: Large Language Model, Transformers · Publisher: mlx-community · Downloads: 1,045 · Likes: 4
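
A minimal text-generation sketch with the Hugging Face transformers pipeline. It assumes the upstream checkpoint `deepseek-ai/DeepSeek-R1-Distill-Qwen-7B` (the mlx-community entry above is an MLX conversion of the same model) and a recent transformers version that accepts chat-format input:

```python
# Sketch: text generation with the upstream R1-distill checkpoint.
# Repo id and prompt are assumptions; chat-format pipeline input
# requires a recent transformers release.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# R1-distill models are chat/reasoning models, so use the chat format.
messages = [{"role": "user", "content": "Explain knowledge distillation in one paragraph."}]
out = generator(messages, max_new_tokens=256)
# Recent versions return the full conversation, assistant reply last.
print(out[0]["generated_text"][-1]["content"])
```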
## Faster Whisper Large V3 French Distil Dec16

A distilled French version of Whisper-Large-V3, optimized for inference efficiency by reducing decoder layers while maintaining good performance.

License: MIT · Tags: Speech Recognition, Transformers, French · Publisher: brandenkmurray · Downloads: 25 · Likes: 3
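
Models in the Faster Whisper family target the CTranslate2-based faster-whisper runtime rather than the plain transformers API. A minimal sketch, assuming the repo id `brandenkmurray/faster-whisper-large-v3-french-distil-dec16` (inferred from the listing title) and a local audio file:

```python
# Sketch: French transcription with the faster-whisper runtime.
# Repo id inferred from the listing; audio path is a placeholder.
from faster_whisper import WhisperModel

model = WhisperModel(
    "brandenkmurray/faster-whisper-large-v3-french-distil-dec16",
    device="cpu",
    compute_type="int8",  # quantized CPU inference
)

segments, info = model.transcribe("interview_fr.mp3", language="fr", beam_size=5)
print(f"Detected language: {info.language} (p={info.language_probability:.2f})")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```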
## ViT Base Patch16 224 DistilGPT2

DistilViT is an image-caption generation model that pairs a Vision Transformer (ViT) encoder with a distilled GPT-2 decoder to convert images into textual descriptions.

License: Apache-2.0 · Tags: Image-to-Text, Transformers · Publisher: tarekziade · Downloads: 17 · Likes: 0
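
A minimal captioning sketch with the transformers image-to-text pipeline; the repo id `tarekziade/vit-base-patch16-224-distilgpt2` is inferred from the listing and the image path is a placeholder:

```python
# Sketch: image captioning via the image-to-text pipeline.
# Repo id inferred from the listing; image path is a placeholder.
from transformers import pipeline

captioner = pipeline("image-to-text", model="tarekziade/vit-base-patch16-224-distilgpt2")

# Accepts a local path or a URL to an image.
result = captioner("photo.jpg", max_new_tokens=30)
print(result[0]["generated_text"])
```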
## Distil Whisper Large V3 German

A German speech recognition model based on distil-whisper techniques, with 756 million parameters, achieving faster inference while maintaining high quality.

License: Apache-2.0 · Tags: Speech Recognition, Transformers, German · Publisher: primeline · Downloads: 207 · Likes: 15
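
For finer control than the high-level pipeline offers, the model can be driven through the lower-level transformers API. A sketch assuming the repo id `primeline/distil-whisper-large-v3-german` (inferred from the listing) and a German recording resampled to 16 kHz:

```python
# Sketch: German transcription with the low-level transformers API.
# Repo id inferred from the listing; audio path is a placeholder.
import torch
import librosa
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor

model_id = "primeline/distil-whisper-large-v3-german"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id, torch_dtype=torch.float32)

# Whisper-family models expect 16 kHz mono audio.
speech, _ = librosa.load("rede.wav", sr=16000, mono=True)
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")

generated_ids = model.generate(inputs.input_features, language="de", max_new_tokens=128)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```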
## Distil Large V3

Distil-Whisper distil-large-v3 is a knowledge-distilled version of Whisper large-v3 focused on English automatic speech recognition, offering faster inference while keeping accuracy close to the original model.

License: MIT · Tags: Speech Recognition, English · Publisher: distil-whisper · Downloads: 417.11k · Likes: 311
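
The simplest way to run it is the automatic-speech-recognition pipeline, which also handles long-form audio via chunking; `distil-whisper/distil-large-v3` follows from the publisher column, and the audio file is a placeholder:

```python
# Sketch: English ASR with the high-level pipeline.
# Audio path is a placeholder.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="distil-whisper/distil-large-v3",
    chunk_length_s=25,  # chunked long-form decoding
    batch_size=8,
)

print(asr("meeting.wav")["text"])
```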
## Dist MPNet ParaCrawl Cs-En

A distilled model based on a BERT-small architecture, designed specifically for Czech-English semantic embeddings.

Tags: Text Embedding, Transformers, Multilingual · Publisher: Seznam · Downloads: 393 · Likes: 4
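
A sketch of pulling cross-lingual sentence embeddings with plain transformers. The repo id `Seznam/dist-mpnet-paracrawl-cs-en` is inferred from the listing, and mask-aware mean pooling is an assumption; the model card may prescribe a different pooling strategy:

```python
# Sketch: Czech-English sentence embeddings.
# Repo id inferred from the listing; mean pooling is an assumption.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "Seznam/dist-mpnet-paracrawl-cs-en"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

sentences = ["Dnes je krásný den.", "It is a beautiful day today."]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state  # (batch, seq, dim)

# Mask-aware mean pooling over token embeddings.
mask = batch.attention_mask.unsqueeze(-1)
embeddings = (hidden * mask).sum(1) / mask.sum(1)

# Cross-lingual similarity between the Czech and English sentences.
sim = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(f"cosine similarity: {sim:.3f}")
```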
## SBERT Chinese QMC Finance V1 Distill

A lightweight sentence-similarity model optimized for question matching in the financial domain; distillation compresses the original 12-layer BERT to 4 layers, significantly improving inference efficiency.

Tags: Text Embedding, Transformers · Publisher: DMetaSoul · Downloads: 20 · Likes: 3
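
Sentence-similarity checkpoints like this one are typically consumed through the sentence-transformers library. A sketch assuming the repo id `DMetaSoul/sbert-chinese-qmc-finance-v1-distill` and two made-up finance questions:

```python
# Sketch: financial question matching with sentence-transformers.
# Repo id inferred from the listing; example questions are made up.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("DMetaSoul/sbert-chinese-qmc-finance-v1-distill")

q1 = "信用卡的年费是多少？"            # "What is the credit card's annual fee?"
q2 = "办这张信用卡每年要交多少钱？"    # "How much do I pay per year for this card?"

emb = model.encode([q1, q2], convert_to_tensor=True)
print(util.cos_sim(emb[0], emb[1]).item())
```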
## Distil Wav2Vec2 Adult Child Cls 37M

An audio classification model based on the wav2vec 2.0 architecture, designed to distinguish between adult and child voices.

License: Apache-2.0 · Tags: Audio Classification, Transformers, English · Publisher: bookbot · Downloads: 15 · Likes: 2
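
A sketch using the audio-classification pipeline; the repo id `bookbot/distil-wav2vec2-adult-child-cls-37m` is inferred from the listing and the clip is a placeholder:

```python
# Sketch: adult-vs-child voice classification via the pipeline API.
# Repo id inferred from the listing; audio clip is a placeholder.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="bookbot/distil-wav2vec2-adult-child-cls-37m",
)

for pred in classifier("clip.wav"):
    print(f"{pred['label']}: {pred['score']:.3f}")
```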
## DistilCamemBERT Base

DistilCamemBERT is a distilled version of the French CamemBERT model, significantly reducing model complexity through knowledge distillation while maintaining performance.

License: MIT · Tags: Large Language Model, Transformers, French · Publisher: cmarkea · Downloads: 15.79k · Likes: 31
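
Like CamemBERT itself, the distilled model can be exercised with the fill-mask pipeline (CamemBERT-family models use `<mask>` as the mask token); the repo id follows from the publisher column:

```python
# Sketch: masked-word prediction in French.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="cmarkea/distilcamembert-base")

for pred in fill_mask("Paris est la <mask> de la France."):
    print(f"{pred['token_str']}: {pred['score']:.3f}")
```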
## DistilBERT Base Uncased

DistilBERT is a distilled version of the BERT base model: lighter and more efficient while maintaining similar performance, and well suited to natural language processing tasks such as sequence classification and token classification.

License: Apache-2.0 · Tags: Large Language Model, English · Publisher: distilbert · Downloads: 11.1M · Likes: 669
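
A sketch of the usual fine-tuning starting point: loading `distilbert-base-uncased` with a freshly initialized sequence-classification head. The head is untrained, so the logits below are meaningless until the model is fine-tuned on labeled data:

```python
# Sketch: DistilBERT with a new sequence-classification head.
# The head is randomly initialized; fine-tune before trusting outputs.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

inputs = tokenizer("Distillation makes inference cheaper.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)  # shape (1, 2); random until fine-tuned
```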
## DistilBERT Base En-De Cased

A lightweight version of distilbert-base-multilingual-cased restricted to English and German, maintaining the representation capability and accuracy of the original model. Usage is identical to the English-Arabic variant below; see the sketch after it.

License: Apache-2.0 · Tags: Large Language Model, Transformers, Other · Publisher: Geotrend · Downloads: 23 · Likes: 0
## DistilBERT Base En-Ar Cased

A distilled version of distilbert-base-multilingual-cased, specifically optimized for English and Arabic while maintaining the original model's accuracy.

License: Apache-2.0 · Tags: Large Language Model, Transformers, Other · Publisher: Geotrend · Downloads: 31 · Likes: 0
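
Both Geotrend models load like any other DistilBERT checkpoint. A sketch, with repo ids inferred from the listing, that extracts contextual embeddings for an English/German pair; swap in `Geotrend/distilbert-base-en-ar-cased` for English-Arabic:

```python
# Sketch: contextual embeddings from the bilingual Geotrend checkpoints.
# Repo ids inferred from the listing.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "Geotrend/distilbert-base-en-de-cased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

batch = tokenizer(
    ["Knowledge distillation shrinks models.", "Wissensdestillation verkleinert Modelle."],
    padding=True,
    return_tensors="pt",
)
with torch.no_grad():
    hidden = model(**batch).last_hidden_state
print(hidden.shape)  # (2, seq_len, hidden_dim)
```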