DistilBART-CNN-6-6
A lightweight text summarization model based on the BART architecture. Knowledge distillation compresses the original model while retaining its core summarization capabilities.
Downloads 1,260
Release Time: 5/2/2023
Model Overview
This model is a distilled version of BART, optimized specifically for text summarization. The "6-6" in the name refers to its 6 encoder and 6 decoder layers, distilled from the 12-layer bart-large-cnn teacher. It maintains high summarization quality while significantly reducing computational resource requirements.
Model Features
Lightweight and efficient
Reduces the original BART model size by half through knowledge distillation, significantly lowering computational resource demands
Retains core capabilities
Despite the reduced model size, it maintains over 90% of the original model's summarization quality
Web-optimized
Provides ONNX format weights, supporting execution in browser environments via Transformers.js
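A minimal sketch of in-browser usage with Transformers.js, assuming the ONNX export is published under a repository id such as Xenova/distilbart-cnn-6-6 (the exact id may differ):

```typescript
import { pipeline } from '@xenova/transformers';

// Assumed repository id for the web-ready ONNX weights; the exact id may differ.
const summarizer = await pipeline('summarization', 'Xenova/distilbart-cnn-6-6');

const article = `Paste the long article text to be summarized here...`;

// The pipeline returns an array of results with a summary_text field.
const output = await summarizer(article);
console.log(output[0].summary_text);
```

The same code runs in Node.js; Transformers.js downloads and caches the ONNX weights on first use.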
Model Capabilities
Generative text summarization
Long-text compression (see the chunking sketch after this list)
Key information extraction
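BART-family encoders accept roughly 1024 tokens, so one common way to compress a document longer than that is to summarize it chunk by chunk and then summarize the concatenated chunk summaries. A sketch of that pattern, under the same assumed Transformers.js setup and model id as above; the chunking helper and character budget are illustrative, not part of the model:

```typescript
import { pipeline } from '@xenova/transformers';

// Same assumed model id as in the browser sketch above.
const summarizer = await pipeline('summarization', 'Xenova/distilbart-cnn-6-6');

// Split a long document into paragraph-aligned chunks under a rough character budget
// (an illustrative stand-in for the ~1024-token encoder limit).
function chunkText(text: string, maxChars = 3000): string[] {
  const chunks: string[] = [];
  let current = '';
  for (const para of text.split(/\n\s*\n/)) {
    if (current && (current + para).length > maxChars) {
      chunks.push(current.trim());
      current = '';
    }
    current += para + '\n\n';
  }
  if (current.trim()) chunks.push(current.trim());
  return chunks;
}

// Summarize each chunk, then summarize the combined chunk summaries.
async function summarizeLongText(text: string): Promise<string> {
  const partials: string[] = [];
  for (const chunk of chunkText(text)) {
    const out = await summarizer(chunk, { max_new_tokens: 80 });
    partials.push(out[0].summary_text);
  }
  const final = await summarizer(partials.join(' '), { max_new_tokens: 120 });
  return final[0].summary_text;
}
```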
Use Cases
Content summarization
News summarization
Automatically generates key-point summaries of news articles
Produces concise 3-5 sentence summaries containing the core information (see the length-control sketch after this list)
Document condensation
Compresses technical documents or research reports into executive summaries
Preserves key conclusions and technical parameters
Content preprocessing
Search enhancement
Generates webpage content summaries for search engines
Improves readability and information density of search results
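For the news summarization use case above, the 3-5 sentence target can be approximated by bounding the number of generated tokens. A minimal sketch, assuming the min_new_tokens / max_new_tokens generation options behave as in the Hugging Face GenerationConfig; the bounds below are illustrative guesses to tune per language and content:

```typescript
import { pipeline } from '@xenova/transformers';

// Same assumed model id as in the sketches above.
const summarizer = await pipeline('summarization', 'Xenova/distilbart-cnn-6-6');

const newsArticle = `Full text of the news article goes here...`;

// Illustrative bounds for a roughly 3-5 sentence summary.
const result = await summarizer(newsArticle, {
  min_new_tokens: 40,   // discourage one-line summaries
  max_new_tokens: 120,  // cap overall summary length
});
console.log(result[0].summary_text);
```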