DistilBart-MNLI-12-9
DistilBart-MNLI is a lightweight model distilled from bart-large-mnli using a no-teacher distillation technique, retaining most of the teacher's accuracy at a fraction of its size.
Downloads: 8,343
Release date: 3/2/2022
Model Overview
This model is intended for zero-shot classification, which it performs as a natural language inference (NLI) task. It is a distilled version of bart-large-mnli, built by copying alternating layers from the teacher into a smaller student and fine-tuning that student on the same MNLI data.
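A minimal usage sketch with the Hugging Face transformers zero-shot pipeline; valhalla/distilbart-mnli-12-9 is the upstream checkpoint name, and the example text and labels are purely illustrative:

```python
from transformers import pipeline

# Load the 12-9 checkpoint as a zero-shot classifier.
classifier = pipeline(
    "zero-shot-classification",
    model="valhalla/distilbart-mnli-12-9",
)

result = classifier(
    "One day I will see the world.",
    candidate_labels=["travel", "cooking", "dancing"],
)
print(result["labels"])  # candidate labels ranked by score
print(result["scores"])  # matching probabilities
```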
Model Features
Efficient distillation
Uses a no-teacher distillation approach: alternating layers are copied directly from bart-large-mnli instead of being trained against teacher outputs, substantially reducing model size
High performance retention
Stays close to bart-large-mnli's accuracy on the MNLI benchmark, with only a small drop in matched and mismatched accuracy
Multiple version options
Comes in several depths (12-1, 12-3, 12-6, 12-9, where the two numbers are the encoder and decoder layer counts), letting users trade accuracy against speed and memory as needed; see the selection sketch after this list
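A rough sketch of how a variant might be selected; the checkpoint IDs are the upstream names on the Hugging Face Hub, while the lookup table itself is a hypothetical helper:

```python
from transformers import pipeline

# Hypothetical lookup: all variants keep 12 encoder layers; the key is
# the number of decoder layers copied from bart-large-mnli.
DISTILBART_MNLI_VARIANTS = {
    1: "valhalla/distilbart-mnli-12-1",  # smallest, fastest, lowest accuracy
    3: "valhalla/distilbart-mnli-12-3",
    6: "valhalla/distilbart-mnli-12-6",
    9: "valhalla/distilbart-mnli-12-9",  # largest, closest to the teacher
}

# Pick the depth that fits the latency budget.
classifier = pipeline("zero-shot-classification", model=DISTILBART_MNLI_VARIANTS[9])
```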
Model Capabilities
Natural language inference
Zero-shot classification
Text classification
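For the raw NLI capability, a sketch that scores a single premise/hypothesis pair directly; the example sentences are illustrative, and the label names are read from the checkpoint's config rather than assumed:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "valhalla/distilbart-mnli-12-9"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# Encode the pair and take a softmax over the three NLI classes.
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]

for idx, label in model.config.id2label.items():
    print(f"{label}: {probs[idx].item():.3f}")
```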
Use Cases
Text analysis
Sentiment analysis
Classify the sentiment of a text without any task-specific training, by passing sentiment categories as candidate labels
Topic classification
Assign one or more topic labels to arbitrary text; a combined sketch for both text-analysis cases follows this list
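A combined sketch for both text-analysis cases; the texts and label sets are illustrative:

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="valhalla/distilbart-mnli-12-9")

# Sentiment analysis: sentiment categories are just candidate labels.
review = "The battery dies fast, but the screen is gorgeous."
print(classifier(review, candidate_labels=["positive", "negative"]))

# Topic classification: multi_label=True scores each label independently,
# so one text can receive several topics at once.
article = "The central bank raised interest rates to curb inflation."
print(classifier(
    article,
    candidate_labels=["economics", "politics", "sports", "technology"],
    multi_label=True,
))
```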
Question answering systems
Question understanding
Score the semantic relationship between a question and candidate answers, for example by ranking answers with their entailment probability; a sketch follows
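One way to implement this with an NLI model is to restate the question as a declarative sentence and rank candidate answers by how strongly each one entails it. A sketch under that assumption (the example texts are illustrative, and the entailment index is looked up from the config with a fallback):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "valhalla/distilbart-mnli-12-9"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# The question "Where is the Eiffel Tower?" restated as a statement.
question_as_statement = "The Eiffel Tower is located in Paris."
candidates = [
    "The Eiffel Tower stands on the Champ de Mars in Paris, France.",
    "The Statue of Liberty was a gift from France to the United States.",
]

# Entailment index taken from the config; the fallback assumes MNLI order.
entailment_id = model.config.label2id.get("entailment", 2)

for answer in candidates:
    inputs = tokenizer(answer, question_as_statement, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1)[0]
    print(f"{probs[entailment_id].item():.3f}  {answer}")
```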