DistilBart-MNLI-12-1
DistilBart-MNLI is a distilled version of bart-large-mnli created with the No Teacher Distillation technique; it stays close to the original's accuracy while being substantially smaller.
Downloads: 217.48k
Release Time: 3/2/2022
Model Overview
This model is a natural language inference model based on the BART architecture, specifically designed for zero-shot classification tasks.
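For reference, zero-shot classification with this checkpoint runs through the standard transformers pipeline. A minimal sketch, assuming the Hub id valhalla/distilbart-mnli-12-1; the example text and candidate labels are illustrative:

```python
from transformers import pipeline

# Zero-shot classification pipeline backed by this checkpoint.
classifier = pipeline(
    "zero-shot-classification",
    model="valhalla/distilbart-mnli-12-1",
)

result = classifier(
    "The new graphics card delivers excellent performance for the price.",
    candidate_labels=["technology", "sports", "politics"],
)
print(result["labels"][0], result["scores"][0])  # top label and its score
```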
Model Features
Efficient distillation
Uses the No Teacher Distillation technique: alternating layers are copied from bart-large-mnli and the smaller model is then fine-tuned on the same data, significantly reducing model size (a sketch follows this list)
Performance retention
Despite its smaller size, it maintains accuracy close to that of the original model on the MNLI dataset
Multi-layer configuration options
Offers several layer configurations (12-1, 12-3, 12-6, 12-9, i.e. 12 encoder layers paired with 1, 3, 6, or 9 decoder layers) to balance accuracy and efficiency as needed
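As a rough illustration of the layer-copying idea, the sketch below builds a smaller student by copying the shared weights and a subset of decoder layers from bart-large-mnli. Which decoder layer to keep is an assumption made here for illustration; the actual recipe also fine-tunes the student on MNLI afterwards.

```python
from transformers import BartConfig, BartForSequenceClassification

# Teacher: the full 12-encoder / 12-decoder-layer model.
teacher = BartForSequenceClassification.from_pretrained("facebook/bart-large-mnli")

# Student config: same architecture but fewer decoder layers (here 1, as in 12-1).
config = BartConfig.from_pretrained("facebook/bart-large-mnli", decoder_layers=1)
student = BartForSequenceClassification(config)

# Copy embeddings, encoder, and classification head; strict=False simply
# ignores the teacher decoder layers the student does not have.
student.load_state_dict(teacher.state_dict(), strict=False)

# Copy one teacher decoder layer into the student's single slot
# (which layer to pick is an assumption; shown here: the first).
student.model.decoder.layers[0].load_state_dict(
    teacher.model.decoder.layers[0].state_dict()
)
# The student would then be fine-tuned on MNLI to recover accuracy.
```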
Model Capabilities
Natural language inference
Zero-shot classification
Text classification
Use Cases
Text classification
Zero-shot sentiment analysis
Classifies text sentiment without any sentiment-specific training (see the example below)
Topic classification
Classifies documents by topic
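Both use cases reduce to supplying candidate labels at inference time. A small illustrative example for sentiment, where the review text and the label set are made up:

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="valhalla/distilbart-mnli-12-1")

# Sentiment polarities are just candidate labels; no sentiment-specific
# fine-tuning is involved.
review = "The battery dies within two hours and support never replied."
result = classifier(review, candidate_labels=["positive", "negative", "neutral"])
print(dict(zip(result["labels"], result["scores"])))
```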
Natural language understanding
Textual entailment judgment
Determines the logical relationship between two texts (entailment/neutral/contradiction)
Achieves about 87% matched accuracy on the MNLI dataset, compared with roughly 90% for bart-large-mnli
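The entailment judgment can also be made directly with the raw NLI head, as in this sketch; the premise/hypothesis pair is illustrative, and the label order is read from the checkpoint config rather than hardcoded:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "valhalla/distilbart-mnli-12-1"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# Encode the premise/hypothesis as a single sequence pair.
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Map the argmax logit back to entailment / neutral / contradiction.
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```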