DistilBART XSum 12-6

Developed by sshleifer
DistilBART is a distilled version of the BART model for text summarization that significantly reduces model size and inference time while maintaining high summarization quality.
Downloads 1,446
Release date: 3/2/2022

Model Overview

DistilBART is a lightweight text summarization model based on the BART architecture and compressed through knowledge distillation, making it suitable for scenarios such as news summarization.

Model Features

Efficient Inference
2.54x faster inference than the original BART model while maintaining good summarization quality
Lightweight Architecture
Only 222M parameters, approximately a 45% reduction from the original BART-large (406M)
Multiple Version Options
Several versions with different compression ratios are available, allowing a trade-off between summarization quality and efficiency

Model Capabilities

Text Summarization
Long Text Compression
Key Information Extraction
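
These capabilities can be tried out through the Hugging Face `transformers` summarization pipeline. The sketch below assumes `transformers` (and a PyTorch backend) is installed; the checkpoint name `sshleifer/distilbart-xsum-12-6` is the model described by this card, while the sample article and the `min_length`/`max_length` values are illustrative choices, not recommendations from the model authors:

```python
from transformers import pipeline

# Load the DistilBART checkpoint into a summarization pipeline.
# The first call downloads the model weights from the Hugging Face Hub.
summarizer = pipeline("summarization", model="sshleifer/distilbart-xsum-12-6")

article = (
    "The tower is 324 metres tall, about the same height as an 81-storey "
    "building, and is the tallest structure in Paris. During its construction, "
    "the Eiffel Tower surpassed the Washington Monument to become the tallest "
    "man-made structure in the world, a title it held for 41 years."
)

# XSum-style models tend to produce a single-sentence summary;
# the length bounds here are illustrative.
result = summarizer(article, max_length=40, min_length=5, do_sample=False)
print(result[0]["summary_text"])
```

Because the model is distilled for the XSum task, its output style is an extreme one-sentence summary rather than the multi-sentence highlights produced by CNN/Daily Mail variants.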

Use Cases

News Media
Automatic News Summarization
Compress lengthy news reports into concise summaries
Achieves a ROUGE-L score of 33.37 on the CNN/Daily Mail dataset
Content Analysis
Document Key Information Extraction
Extract core content from long documents