
DistilBART CNN 12-6

Developed by sshleifer
DistilBART is a distilled version of the BART model optimized for text summarization tasks; it offers significantly faster inference while maintaining high summarization quality.
Downloads: 783.96k
Release Date: 3/2/2022

Model Overview

A lightweight text summarization model based on the BART architecture and compressed with knowledge distillation, suited to scenarios such as news summarization.
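
A minimal usage sketch with the Hugging Face `transformers` summarization pipeline follows. The Hub checkpoint ID `sshleifer/distilbart-cnn-12-6` is inferred from the model name and author above.

```python
# Minimal sketch: load the checkpoint through the transformers summarization
# pipeline. The Hub ID below is inferred from the model name/author above.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

article = (
    "The tower is 324 metres tall, about the same height as an 81-storey "
    "building. It was the tallest man-made structure in the world for 41 "
    "years, until the Chrysler Building in New York City was finished."
)

# max_length / min_length bound the generated summary in tokens.
result = summarizer(article, max_length=60, min_length=15, do_sample=False)
print(result[0]["summary_text"])
```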

Model Features

Efficient Inference
Inference is 2.54× faster than the original BART model (measured on the distilbart-xsum-12-1 variant)
Performance Balance
Strikes a good balance between model compression and summarization quality, with ROUGE-L scores close to those of the original BART model
Multiple Configuration Options
Offers several layer configurations (e.g., 12-1, 6-6) to trade off speed against accuracy; see the loading sketch after this list
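
As a sketch of how a specific configuration might be selected: the `distilbart-xsum-12-1` ID comes from the feature above, while the other Hub IDs are assumptions that the variants follow the same naming pattern.

```python
# The "12-1"/"6-6" suffixes name the encoder/decoder layer counts kept after
# distillation. Checkpoint IDs other than distilbart-xsum-12-1 (cited above)
# and distilbart-cnn-12-6 (this model card) are assumed, not confirmed here.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

CHECKPOINTS = {
    "fastest": "sshleifer/distilbart-xsum-12-1",  # 12 encoder / 1 decoder layers
    "balanced": "sshleifer/distilbart-cnn-6-6",   # assumed ID for the 6-6 config
    "quality": "sshleifer/distilbart-cnn-12-6",   # this model card's checkpoint
}

name = CHECKPOINTS["quality"]
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

inputs = tokenizer("Long article text ...", truncation=True, max_length=1024,
                   return_tensors="pt")
summary_ids = model.generate(**inputs, num_beams=4, min_length=15, max_length=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```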

Model Capabilities

News Summary Generation
Long Text Compression (see the chunking sketch after this list)
Key Information Extraction
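
BART-based models accept roughly 1024 input tokens, so compressing longer texts typically requires chunking. A hypothetical helper sketch follows; the function name and chunk size are illustrative, not part of the model card.

```python
# Hypothetical chunking helper for documents longer than the model's
# ~1024-token input limit: summarize each chunk, then join the pieces.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")
tokenizer = summarizer.tokenizer

def summarize_long(text: str, chunk_tokens: int = 900) -> str:
    ids = tokenizer.encode(text, add_special_tokens=False)
    chunks = [ids[i:i + chunk_tokens] for i in range(0, len(ids), chunk_tokens)]
    parts = []
    for chunk in chunks:
        piece = tokenizer.decode(chunk)
        out = summarizer(piece, max_length=80, min_length=20, do_sample=False)
        parts.append(out[0]["summary_text"])
    return " ".join(parts)
```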

Use Cases

Media Industry
Automatic News Summarization
Generates concise summaries for long news articles
Achieves a ROUGE-2 score of 20.57 on the CNN/DailyMail dataset
Content Analysis
Document Key Information Extraction
Extracts core content from long documents
Achieves a ROUGE-L score of 33.37 on the XSum dataset (the sketch below shows how such scores are computed)
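
The ROUGE figures above are standard summarization metrics. A sketch of computing them with the Hugging Face `evaluate` library follows; it assumes the `evaluate` and `rouge_score` packages are installed, and the strings are placeholders.

```python
# Sketch: compute ROUGE for model summaries against reference summaries.
# Assumes `pip install evaluate rouge_score`; the strings are placeholders.
import evaluate

rouge = evaluate.load("rouge")
predictions = ["the model produced this summary"]
references = ["the reference summary written by a human"]

scores = rouge.compute(predictions=predictions, references=references)
print(scores["rouge2"], scores["rougeL"])  # ROUGE-2 and ROUGE-L F-measures
```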