
DistilBART XSum 6-6

Developed by sshleifer
DistilBART is a distilled version of the BART model for text summarization that significantly reduces model size and inference time while maintaining strong performance.
Downloads: 147
Release date: 3/2/2022

Model Overview

DistilBART is a lightweight text summarization model based on the BART architecture, compressed from the original model using knowledge distillation techniques, suitable for generating concise and accurate summaries.
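The knowledge-distillation idea mentioned above can be illustrated with a minimal sketch: the student is trained to match the teacher's temperature-softened output distribution via a KL-divergence loss. This is the classic formulation (Hinton et al.); the exact recipe used to produce this particular checkpoint may differ, so treat this as a conceptual example, not the model's actual training code.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T produces softer distributions.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) over temperature-softened distributions,
    scaled by T^2 as in the classic distillation formulation."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return float(np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student))) * T * T)

# A student that reproduces the teacher's logits exactly incurs zero loss.
teacher = np.array([2.0, 1.0, 0.1])
print(distillation_loss(teacher, teacher))  # → 0.0
```

In practice this soft-label loss is typically combined with the ordinary cross-entropy on the ground-truth summaries.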

Model Features

Efficient Inference: 2.54x faster inference than the original BART model
Lightweight: approximately 45% fewer parameters, reduced from 406M to 222M
Balanced Performance: maintains a good trade-off between model size and summarization quality (ROUGE scores)

Model Capabilities

Abstractive text summarization
Long-text compression
Key information extraction
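The capabilities above are exposed through the standard Hugging Face `transformers` summarization pipeline; a minimal usage sketch (the first run downloads the checkpoint, so network access is assumed, and the sample article and generation parameters are illustrative choices, not values from this card):

```python
from transformers import pipeline

# Load the distilled checkpoint from the Hugging Face Hub.
summarizer = pipeline("summarization", model="sshleifer/distilbart-xsum-6-6")

article = (
    "The tower is 324 metres tall, about the same height as an 81-storey "
    "building, and was the tallest man-made structure in the world for "
    "41 years until the Chrysler Building in New York was finished in 1930."
)
result = summarizer(article, max_length=40, min_length=5, do_sample=False)
print(result[0]["summary_text"])
```

`do_sample=False` gives deterministic (greedy/beam) output, which is usually what you want for summarization.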

Use Cases

News Summarization
News Article Summarization
Automatically compresses lengthy news articles into concise summaries
Achieves a ROUGE-L score of 33.37 on the CNN/DailyMail dataset
Content Summarization
Document Summarization
Generates executive summaries for long documents
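The ROUGE-L metric cited above measures the longest common subsequence (LCS) between a generated summary and a reference. A simplified sketch of the F-score computation (the official ROUGE implementation adds stemming, sentence-level handling, and bootstrap aggregation, so the numbers here are illustrative only; `beta=1.2` is the weighting used in the original ROUGE paper):

```python
def lcs_len(a, b):
    # Dynamic-programming longest common subsequence over token lists.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[len(a)][len(b)]

def rouge_l_f1(candidate, reference, beta=1.2):
    # LCS-based precision/recall combined into an F-score.
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(c), lcs / len(r)
    return ((1 + beta ** 2) * prec * rec) / (rec + beta ** 2 * prec)

print(round(rouge_l_f1("the cat sat", "the cat sat on the mat"), 3))  # → 0.629
```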