DistilBART-XSum-1-1
DistilBART is a distilled version of the BART model optimized for text summarization. It significantly reduces model size and inference time while maintaining performance close to the original model's.
Downloads: 2,198
Release Time: 3/2/2022
Model Overview
A distilled model based on the BART architecture, primarily used for generative (abstractive) summarization of English text. The original model is compressed through knowledge distillation to enable more efficient inference.
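A minimal usage sketch with the Hugging Face transformers summarization pipeline, assuming the checkpoint is published on the Hub under the id sshleifer/distilbart-xsum-1-1 (the id this card's name suggests; substitute your own path if it differs):

```python
from transformers import pipeline

# Assumed Hub id; swap in a local path or another DistilBART variant if needed.
summarizer = pipeline("summarization", model="sshleifer/distilbart-xsum-1-1")

article = (
    "The tower is 324 metres tall, about the same height as an 81-storey "
    "building. It was the first structure to reach a height of 300 metres "
    "and was the tallest man-made structure in the world for 41 years."
)

# XSum-style models aim for a single, highly abstractive summary sentence.
print(summarizer(article, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```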
Model Features
Efficient Inference
Inference runs about 1.68x faster than the original BART-large model while retaining over 90% of its ROUGE scores (a rough timing sketch follows this feature list).
Multi-version Configuration
Provides sub-versions at different parameter scales (12-1, 6-6, 12-3, etc.) so users can trade summary quality against speed.
Dual Dataset Adaptation
Separately optimized variants are available for the CNN/DailyMail and XSum datasets, catering to different summarization styles.
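The 1.68x figure above comes from the upstream benchmarks; one rough way to sanity-check it locally is to time the distilled pipeline against the baseline on the same input. This sketch assumes both the facebook/bart-large-xsum baseline and the distilled checkpoint are reachable on the Hub:

```python
import time

from transformers import pipeline

def mean_latency(model_id: str, text: str, runs: int = 5) -> float:
    """Average seconds per summary over several runs, after one warm-up call."""
    summarize = pipeline("summarization", model=model_id)
    summarize(text, max_length=60, min_length=10)  # warm-up: loads weights, fills caches
    start = time.perf_counter()
    for _ in range(runs):
        summarize(text, max_length=60, min_length=10)
    return (time.perf_counter() - start) / runs

# Placeholder input; use a real article for a meaningful measurement.
text = "Long news article text goes here. " * 20

baseline = mean_latency("facebook/bart-large-xsum", text)
distilled = mean_latency("sshleifer/distilbart-xsum-1-1", text)
print(f"speed-up: {baseline / distilled:.2f}x")
```

Wall-clock numbers depend heavily on hardware and batch size, so treat the ratio, not the absolute latency, as the figure of interest.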
Model Capabilities
Generative Text Summarization
Long Text Compression
Key Information Extraction
Use Cases
News Media
News Brief Generation
Automatically compresses long news reports into concise summaries
Achieves a 22.12 ROUGE-2 score on the XSum dataset
Content Analysis
Document Summary Generation
Extracts core content from long documents to form executive summaries
Achieves a 21.26 ROUGE-2 score on the CNN/DailyMail dataset
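The ROUGE-2 numbers quoted above are corpus-level scores from the upstream evaluation. As a sketch, a comparable score can be computed on your own model outputs with the evaluate package (which wraps rouge_score; both packages are assumed to be installed):

```python
import evaluate

rouge = evaluate.load("rouge")

# Toy stand-ins; in practice these are model outputs and dataset reference summaries.
predictions = ["The Eiffel Tower was the world's tallest structure for 41 years."]
references = ["For 41 years the Eiffel Tower was the tallest man-made structure."]

scores = rouge.compute(predictions=predictions, references=references)
# evaluate reports ROUGE as fractions; card-style scores scale by 100 (e.g. 0.2212 -> 22.12).
print(f"ROUGE-2: {scores['rouge2'] * 100:.2f}")
```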