DistilBART XSum 9-6
DistilBART is a distilled version of the BART model focused on text summarization; it significantly improves inference speed while maintaining performance close to the original.
Downloads 421
Release Time: 3/2/2022
Model Overview
A distilled model based on the BART architecture, designed specifically for abstractive text summarization while balancing output quality and computational efficiency.
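For illustration, a minimal usage sketch follows. It assumes the model is published on Hugging Face under the ID sshleifer/distilbart-xsum-9-6 (inferred from this card's name, not stated in it) and uses the standard transformers summarization pipeline; the sample article text is invented.

```python
from transformers import pipeline

# Assumed Hugging Face model ID, inferred from the model name on this card.
summarizer = pipeline("summarization", model="sshleifer/distilbart-xsum-9-6")

article = (
    "The local council has approved a new cycling scheme that will add "
    "20 km of protected bike lanes across the city centre by next summer, "
    "funded by a regional transport grant."
)

# XSum-style output: a short, highly abstractive summary.
summary = summarizer(article, max_length=60, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```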
Model Features
Efficient Inference
Achieves a 1.68x speedup over the original BART-large model, cutting inference time from 229 ms to 137 ms (see the timing sketch after this list)
Balanced Performance
Achieves a ROUGE-2 score of 22.12 on the XSum dataset, slightly exceeding the original BART-large's 21.85
Lightweight Design
25% reduction in parameters (from 406M to 306M), making it more suitable for production deployment
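The latency figures above depend on hardware and generation settings. The sketch below is one way such a comparison could be reproduced, assuming the Hugging Face IDs facebook/bart-large-xsum for the baseline and sshleifer/distilbart-xsum-9-6 for the distilled model; absolute numbers will differ from the 229 ms / 137 ms reported here.

```python
import time
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

def time_generate(model_id: str, text: str, runs: int = 5) -> float:
    """Return the average generation time in milliseconds for one input."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_id).eval()
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        model.generate(**inputs, max_length=60)  # warm-up run, not timed
        start = time.perf_counter()
        for _ in range(runs):
            model.generate(**inputs, max_length=60)
    return (time.perf_counter() - start) / runs * 1000.0

article = "Replace this with a full-length news article for a realistic measurement."
for model_id in ("facebook/bart-large-xsum", "sshleifer/distilbart-xsum-9-6"):
    print(f"{model_id}: {time_generate(model_id, article):.0f} ms")
```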
Model Capabilities
Single-document summarization
Long-text compression (see the chunking sketch after this list)
Key information extraction
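BART-style encoders typically accept around 1024 tokens, so compressing long texts usually requires a chunking step. The sketch below shows one simple approach (split, summarize each chunk, join); the chunk size and the summarize_long helper are illustrative assumptions, not part of the model.

```python
from transformers import pipeline

# Assumed Hugging Face model ID; see the usage sketch under Model Overview.
summarizer = pipeline("summarization", model="sshleifer/distilbart-xsum-9-6")

def summarize_long(text: str, chunk_words: int = 600) -> str:
    """Summarize a document longer than the encoder's input window.

    The text is split into word-based chunks sized to stay roughly within
    the ~1024-token limit typical of BART-style encoders, each chunk is
    summarized, and the partial summaries are joined.
    """
    words = text.split()
    chunks = [" ".join(words[i:i + chunk_words])
              for i in range(0, len(words), chunk_words)]
    partials = summarizer(chunks, max_length=60, min_length=10, do_sample=False)
    return " ".join(p["summary_text"] for p in partials)
```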
Use Cases
News Media
News Brief Generation
Automatically generates key point summaries of news articles
Achieves ROUGE scores on the XSum dataset comparable to the full BART-large baseline
Business Analysis
Report Condensation
Compresses lengthy business reports into executive summaries
Retains key decision-making information while reducing text volume by roughly 80%, as sketched below
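The 80% reduction is a target compression ratio rather than a model guarantee. Below is a sketch of ratio-targeted condensation; the condense helper and the 0.2 default ratio are illustrative assumptions, and lengths are approximated in words while the model itself counts tokens.

```python
from transformers import pipeline

# Assumed Hugging Face model ID; see the usage sketch under Model Overview.
summarizer = pipeline("summarization", model="sshleifer/distilbart-xsum-9-6")

def condense(report: str, target_ratio: float = 0.2) -> str:
    """Summarize so the output is roughly target_ratio of the input length.

    Very long reports may exceed the encoder's input window; combine with
    the chunking sketch above in that case.
    """
    n_words = len(report.split())
    max_len = max(30, int(n_words * target_ratio))
    result = summarizer(report, max_length=max_len, min_length=max_len // 2,
                        do_sample=False)
    return result[0]["summary_text"]

# Usage: executive_summary = condense(long_business_report_text)
```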