DistilBart-MNLI-12-6
DistilBart-MNLI is a distilled version of bart-large-mnli built with a no-teacher distillation technique; it significantly reduces model size while retaining most of the original model's performance.
Downloads: 49.63k
Released: 3/2/2022
Model Overview
This model is a distilled BART-based model for zero-shot classification, optimized for MNLI (Multi-Genre Natural Language Inference). Because an NLI model scores whether one text entails another, candidate labels can be rephrased as hypotheses and classified without any task-specific fine-tuning.
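The model can be used directly through the transformers zero-shot classification pipeline. A minimal sketch, assuming the model is published on the Hugging Face Hub as valhalla/distilbart-mnli-12-6 (the input text and labels are illustrative):

```python
# Minimal zero-shot classification sketch; the Hub ID below is assumed.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="valhalla/distilbart-mnli-12-6",  # assumed Hub ID for this variant
)

result = classifier(
    "One day I will see the world.",
    candidate_labels=["travel", "cooking", "dancing"],
)
# The pipeline returns labels sorted by score, highest first.
print(result["labels"][0], round(result["scores"][0], 3))
```

Under the hood, the pipeline turns each candidate label into a hypothesis ("This example is about travel.") and scores it against the input text as an NLI premise.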
Model Features
Efficient distillation
Uses a no-teacher distillation technique: alternating layers are copied from bart-large-mnli and the resulting student is fine-tuned on the same task, significantly reducing model size
High performance retention
Maintains accuracy close to the original model on MNLI tasks (matched accuracy 89.19%, mismatched accuracy 89.01%)
Multiple specifications
Provides distilled variants with different decoder depths (12-1, 12-3, 12-6, 12-9, i.e. 12 encoder layers paired with 1, 3, 6, or 9 decoder layers) to cover a range of speed/accuracy trade-offs, as shown in the sketch below
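Switching between variants only requires changing the model ID. A hedged sketch, assuming the variants are published under the same Hub namespace as this model:

```python
from transformers import pipeline

# Assumed Hub IDs; fewer decoder layers means faster but slightly less accurate.
VARIANTS = [
    "valhalla/distilbart-mnli-12-1",
    "valhalla/distilbart-mnli-12-3",
    "valhalla/distilbart-mnli-12-6",
    "valhalla/distilbart-mnli-12-9",
]

# Pick the accuracy/speed trade-off that fits the deployment budget.
classifier = pipeline("zero-shot-classification", model=VARIANTS[1])
```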
Model Capabilities
Zero-shot classification
Natural language inference
Text classification
Use Cases
Text analysis
Sentiment analysis
Classify text sentiment without fine-tuning
Topic classification
Automatically assign topic labels to document content; both text-analysis use cases are sketched below
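A sketch of both use cases with the same zero-shot pipeline; the texts and label sets here are illustrative, not taken from the model card:

```python
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="valhalla/distilbart-mnli-12-6",  # assumed Hub ID
)

# Sentiment: labels are supplied at inference time, no fine-tuning needed.
sentiment = classifier(
    "The battery life is great, but the screen scratches easily.",
    candidate_labels=["positive", "negative", "neutral"],
)

# Topic classification: multi_label=True scores each label independently,
# so a document can belong to several topics at once.
topics = classifier(
    "The new GPU doubles training throughput on transformer models.",
    candidate_labels=["hardware", "machine learning", "sports"],
    multi_label=True,
)
print(sentiment["labels"][0], topics["labels"][0])
```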
Semantic understanding
Textual entailment
Determine the logical relationship between a premise and a hypothesis (entailment / contradiction / neutral)
Achieves 89.19% matched accuracy on the MNLI dataset; a direct scoring sketch follows
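A sketch of scoring an entailment pair from the raw model outputs. It assumes the standard bart-large-mnli label order (0 = contradiction, 1 = neutral, 2 = entailment); the loop reads the order from model.config.id2label rather than hard-coding it:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "valhalla/distilbart-mnli-12-6"  # assumed Hub ID
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# Encode the (premise, hypothesis) pair and score the three NLI classes.
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]

for label_id, label in model.config.id2label.items():
    print(f"{label}: {probs[label_id]:.3f}")
```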