
distilbart-cnn-12-6-finetuned-weaksup-1000

Developed by cammy
A text summarization model fine-tuned from distilbart-cnn-12-6, trained for 1,000 steps on weakly supervised data
Downloads 79
Release Date: 3/2/2022

Model Overview

This model is a fine-tuned version of distilbart-cnn-12-6 for text summarization tasks, trained with weakly supervised learning
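A minimal sketch of loading the checkpoint with the Hugging Face `transformers` summarization pipeline. The Hub id below is assumed from the model name and author listed above; verify it before use.

```python
# Sketch: summarization with the fine-tuned DistilBART checkpoint.
# The model id is an assumption based on this card's title and author.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="cammy/distilbart-cnn-12-6-finetuned-weaksup-1000",
)

article = (
    "The local council approved a new transit plan on Tuesday. "
    "The plan adds two bus routes and extends evening service, "
    "with funding drawn from a regional infrastructure grant."
)

# Greedy decoding keeps the summary deterministic.
summary = summarizer(article, max_length=60, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```

The pipeline returns a list of dicts, one per input, each with a `summary_text` key.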

Model Features

Efficient Summarization Generation
Based on the DistilBART architecture, it reduces model size while maintaining performance
Weakly Supervised Learning
Fine-tuned using weakly supervised methods, suitable for scenarios with limited labeled data
Lightweight Model
Fewer parameters than the original BART model, giving faster inference

Model Capabilities

Text Summarization
Long Text Compression
Key Information Extraction

Use Cases

Content Summarization
News Summarization
Automatically generates concise summaries of news articles
Achieved a ROUGE-1 score of 25.92 on the test set
Document Summarization
Generates key point summaries for long documents