DistilBART-CNN-12-6
DistilBART-CNN-12-6 is a distilled version of the BART summarization model, with 12 encoder layers and 6 decoder layers, offering a smaller model size and faster inference for text summarization tasks.
Downloads: 18
Release Time: 1/8/2025
Model Overview
This model is a lightweight version based on the BART architecture, retaining the core capabilities of the original model through knowledge distillation while significantly reducing the number of parameters. It is primarily used for generating text summaries and is suitable for scenarios requiring rapid processing of large volumes of text.
Model Features
Lightweight and Efficient
Reduces model size through distillation while maintaining good summarization quality.
ONNX Format Support
ONNX weights converted for Transformers.js, enabling in-browser and Node.js deployment; a usage sketch follows this feature list.
Fast Inference
Offers faster inference speed compared to the original BART model.
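As a rough usage sketch via Transformers.js: the package name @xenova/transformers, the Hub id Xenova/distilbart-cnn-12-6, and the max_new_tokens value below are assumptions and should be checked against the actual model page.

```typescript
// Minimal summarization sketch with Transformers.js.
// Assumed package and model id; verify both against the model repository.
import { pipeline } from '@xenova/transformers';

// Downloads and caches the ONNX weights the first time it runs.
const summarizer = await pipeline('summarization', 'Xenova/distilbart-cnn-12-6');

const article = 'Paste a news article or any long passage here...';

// max_new_tokens caps the length of the generated summary.
const [output] = (await summarizer(article, { max_new_tokens: 100 })) as Array<{ summary_text: string }>;
console.log(output.summary_text);
```

The same pipeline call works in the browser and in Node.js; the weights are cached after the first download.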
Model Capabilities
Text Summary Generation
Long Text Compression
Key Information Extraction
Use Cases
Content Summarization
News Summarization
Automatically generates concise summaries of news articles.
Retains key information and removes redundant content.
Document Summarization
Generates executive summaries for long documents.
Helps quickly understand the core content of documents; a long-document sketch follows this use-case list.
Information Processing
Meeting Minutes Summarization
Extracts key decisions and action items from meeting minutes.
Improves meeting efficiency and facilitates follow-up.
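For documents longer than the encoder's input window, one common workaround is chunked summarization. The sketch below reuses the Transformers.js pipeline assumed above; the character-based chunk size and generation settings are illustrative only.

```typescript
// Naive long-document sketch: BART-family encoders accept roughly 1024 tokens, so longer
// documents are split into chunks, each chunk is summarized, and the partial summaries are joined.
// Character-based chunking is a simplification; a token-aware splitter would be more accurate.
import { pipeline } from '@xenova/transformers';

const summarizer = await pipeline('summarization', 'Xenova/distilbart-cnn-12-6');

async function summarizeLongDocument(text: string, chunkChars = 3000): Promise<string> {
  const partials: string[] = [];
  for (let i = 0; i < text.length; i += chunkChars) {
    const chunk = text.slice(i, i + chunkChars);
    const [out] = (await summarizer(chunk, { max_new_tokens: 80 })) as Array<{ summary_text: string }>;
    partials.push(out.summary_text);
  }
  return partials.join(' ');
}

console.log(await summarizeLongDocument('A very long report or meeting transcript goes here...'));
```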