
DistilBART-CNN-12-3

Developed by sshleifer
DistilBART is a distilled version of the BART model for text summarization. It significantly reduces model size and inference time while maintaining performance close to the original.
Downloads 145
Release Time: 3/2/2022

Model Overview

A lightweight text summarization model based on the BART architecture, compressed using knowledge distillation techniques, suitable for scenarios like news summarization.
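For a quick start, the model can be loaded through the Hugging Face transformers summarization pipeline. The snippet below is a minimal sketch; the example article text is invented for illustration.

```python
from transformers import pipeline

# Load the distilled summarization model from the Hugging Face Hub.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-3")

# Hypothetical input text for demonstration purposes.
article = (
    "The city council approved a new public transit plan on Tuesday. "
    "The plan adds three bus rapid transit lines and extends subway "
    "service hours, with construction expected to begin next spring."
)

summary = summarizer(article, max_length=60, min_length=20, do_sample=False)
print(summary[0]["summary_text"])
```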

Model Features

Efficient Inference
Roughly 1.68x faster inference than the original BART-large model (137 ms vs. 229 ms); a timing sketch follows this list
Performance Balance
Strikes a good balance between model size and summarization quality, with a ROUGE-L score only 0.5% below the baseline
Multiple Configurations
Available in several configurations named by encoder and decoder layer counts (e.g., 12-1, 6-6, 12-3) to suit different scenario requirements
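The speedup above can be sanity-checked by timing both checkpoints on the same input. The sketch below is one way to do that; the benchmark text, run count, and generation settings are assumptions, so absolute numbers will vary with hardware and will not exactly match the figures quoted.

```python
import time
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical benchmark text; any moderately long article works.
text = "The committee released its annual report on Thursday. " * 20

def time_summarization(model_name: str, runs: int = 5) -> float:
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
    inputs = tokenizer(text, truncation=True, max_length=1024, return_tensors="pt")
    # Warm-up run so lazy initialization does not skew the timing.
    model.generate(**inputs, max_length=60)
    start = time.perf_counter()
    for _ in range(runs):
        model.generate(**inputs, max_length=60)
    return (time.perf_counter() - start) / runs

baseline = time_summarization("facebook/bart-large-cnn")
distilled = time_summarization("sshleifer/distilbart-cnn-12-3")
print(f"baseline: {baseline:.3f}s  distilled: {distilled:.3f}s  "
      f"speedup: {baseline / distilled:.2f}x")
```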

Model Capabilities

News Summarization
Long Text Compression
Key Information Extraction

Use Cases

Media Industry
Automatic News Summarization
Compresses lengthy news reports into concise summaries
Achieves a ROUGE-2 score of 22.12 on the XSum dataset
Content Analysis
Document Key Information Extraction
Extracts core content from long documents; see the chunking sketch after this section
Retains over 90% of the key information from the original document
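BART-based models have a bounded input length (1024 tokens), so long-document extraction is commonly handled by splitting the text into chunks, summarizing each independently, and joining the results. The split_into_chunks and summarize_long_document helpers below are hypothetical illustrations of that pattern, not part of the model's API.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-3")

def split_into_chunks(text: str, max_words: int = 700) -> list[str]:
    # Hypothetical helper: a rough word-count budget keeps each chunk
    # under BART's 1024-token input limit for typical English text.
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def summarize_long_document(text: str) -> str:
    # Summarize each chunk independently, then join the partial summaries.
    partial = [
        summarizer(chunk, max_length=80, min_length=20, do_sample=False)[0]["summary_text"]
        for chunk in split_into_chunks(text)
    ]
    return " ".join(partial)
```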