
DistilBERT Base Multilingual Cased

Developed by distilbert
DistilBERT is a distilled version of the BERT base multilingual model, retaining 97% of BERT's performance while using fewer parameters and running faster. It supports 104 languages and is suitable for a wide range of natural language processing tasks.
Downloads 2.8M
Release Time: 3/2/2022

Model Overview

This model is a lightweight version of the BERT base multilingual model, trained with knowledge distillation to preserve most of the teacher's performance while reducing model size and computational cost. It is primarily intended for fine-tuning on downstream tasks such as text classification, named entity recognition, and question answering.
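A minimal sketch of loading the checkpoint for masked language modeling, assuming the Hugging Face transformers library is installed and the model id "distilbert-base-multilingual-cased" is used; the example sentences are illustrative only.

from transformers import pipeline

# Fill-mask pipeline backed by the multilingual DistilBERT checkpoint
# (model id assumed to be "distilbert-base-multilingual-cased" on the Hub).
unmasker = pipeline("fill-mask", model="distilbert-base-multilingual-cased")

# The same checkpoint handles many languages; here English and French.
print(unmasker("Hello, I'm a [MASK] model."))
print(unmasker("Bonjour, je suis un modèle [MASK]."))

For downstream tasks, the same checkpoint would typically be loaded with a task-specific head (for example, sequence classification or token classification) and fine-tuned on labeled data.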

Model Features

Multilingual Support
Supports text processing in 104 languages, including major European and Asian languages
Efficient Inference
Approximately 2x faster inference speed compared to the original BERT model
Knowledge Distillation
Retains 97% of the original model's performance through distillation while significantly reducing model size
Case Sensitivity
Distinguishes letter case; for example, 'english' and 'English' are treated differently (see the tokenizer sketch below)
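A short sketch illustrating the case-sensitivity point with the tokenizer, assuming the transformers library and the "distilbert-base-multilingual-cased" model id; the exact subword splits depend on the checkpoint's vocabulary.

from transformers import AutoTokenizer

# Tokenizer for the cased multilingual checkpoint (model id assumed).
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-multilingual-cased")

# Because the vocabulary is cased, capitalization changes the token sequence:
print(tokenizer.tokenize("english"))  # lowercase form, possibly split into subwords
print(tokenizer.tokenize("English"))  # capitalized form tokenizes differently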

Model Capabilities

Text Understanding
Language Modeling
Multilingual Text Processing
Downstream Task Fine-tuning

Use Cases

Natural Language Processing
Cross-Lingual Text Classification
Classifying text in multilingual environments
Performs well on the XNLI dataset (see the fine-tuning sketch after this list)
Named Entity Recognition
Identifying entities such as person names, locations, and organizations in text
Question Answering System
Building multilingual question answering systems
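A minimal fine-tuning setup for cross-lingual text classification, assuming PyTorch and the transformers library are installed; the model id, the three-label NLI setup, and the premise/hypothesis pair are assumptions for illustration, not part of the original card.

from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Prepare the checkpoint for a downstream classification task
# (e.g. XNLI-style natural language inference with 3 labels).
model_id = "distilbert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=3)

# Encode a premise/hypothesis pair and run a forward pass; actual
# fine-tuning with Trainer or a custom training loop would follow.
inputs = tokenizer(
    "The cat sat on the mat.",
    "An animal is resting.",
    return_tensors="pt",
    truncation=True,
)
outputs = model(**inputs)
print(outputs.logits.shape)  # one score per label, here (1, 3)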