
DistilCamemBERT Base

Developed by cmarkea
DistilCamemBERT is a distilled version of the French CamemBERT model, significantly reducing model complexity while maintaining performance through knowledge distillation techniques.
Downloads: 15.79k
Release date: 3/2/2022

Model Overview

This model is a distilled version of the French RoBERTa model CamemBERT, suitable for various natural language processing tasks such as text classification and semantic matching.

Model Features

Knowledge Distillation Technique
Significantly reduces model complexity while maintaining performance through knowledge distillation. Training combines three loss functions: a distillation (soft-target) loss, a cosine embedding loss, and a masked language modeling (MLM) loss.
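The three training losses above can be sketched as follows. This is a toy NumPy illustration of the DistilBERT-style objective, not the actual training code; the tensor shapes, the temperature value, and the function names are assumptions for the example.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def distillation_losses(student_logits, teacher_logits,
                        student_hidden, teacher_hidden,
                        labels, T=2.0):
    """Toy versions of the three losses used in DistilBERT-style training.

    student_logits, teacher_logits: (n_tokens, vocab) prediction scores
    student_hidden, teacher_hidden: (n_tokens, dim) hidden states
    labels: (n_tokens,) true token ids for the masked positions
    T: softmax temperature for the soft-target loss (assumed value)
    """
    eps = 1e-12
    # 1) Distillation loss: cross-entropy between temperature-softened
    #    teacher and student distributions.
    p_teacher = softmax(teacher_logits / T)
    p_student = softmax(student_logits / T)
    l_distill = -(p_teacher * np.log(p_student + eps)).sum(axis=-1).mean()
    # 2) Cosine loss: align student hidden states with the teacher's.
    cos = (student_hidden * teacher_hidden).sum(-1) / (
        np.linalg.norm(student_hidden, axis=-1)
        * np.linalg.norm(teacher_hidden, axis=-1) + eps)
    l_cos = (1.0 - cos).mean()
    # 3) MLM loss: standard cross-entropy against the true masked tokens.
    p = softmax(student_logits)
    l_mlm = -np.log(p[np.arange(len(labels)), labels] + eps).mean()
    return l_distill, l_cos, l_mlm
```

In practice the three terms are combined as a weighted sum; the weights are hyperparameters of the distillation run.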
High Performance
Excels in multiple French NLP tasks, such as achieving an F1 score of 83% in text classification and 98% in named entity recognition.
Lightweight
Compared to the original CamemBERT model, the distilled version is more lightweight and suitable for resource-constrained environments.

Model Capabilities

Text classification
Semantic matching
Natural language inference
Named entity recognition
Masked language modeling (fill-mask)
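The fill-mask capability can be exercised directly through the Hugging Face `pipeline` API. A minimal sketch, assuming the model is published on the Hub under the id `cmarkea/distilcamembert-base` and uses CamemBERT's `<mask>` token:

```python
from transformers import pipeline

# Load the distilled model for masked-token prediction
# (model id assumed from the cmarkea organization name).
fill_mask = pipeline("fill-mask", model="cmarkea/distilcamembert-base")

# Predict the masked French word and its score.
results = fill_mask("Paris est la <mask> de la France.")
for r in results:
    print(r["token_str"], round(r["score"], 3))
```

The same checkpoint can then be fine-tuned for the downstream tasks listed above (classification, NLI, NER).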

Use Cases

Text Processing
Text Classification
Classify French texts, such as sentiment analysis or topic classification.
Achieves an F1 score of 83% on the FLUE dataset.
Semantic Matching
Determine the semantic similarity between two French texts.
Achieves an F1 score of 77% on the FLUE dataset.
Information Extraction
Named Entity Recognition
Identify named entities from French texts, such as person names and locations.
Achieves an F1 score of 98% on the wikiner_fr dataset.