# ROUGE Optimization
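The heading refers to ROUGE, the n-gram-overlap metric that the summarization models listed below are typically evaluated on. A minimal sketch of a simplified ROUGE-N (whitespace tokenization, no stemming; real evaluations usually use the `rouge-score` package, which also handles ROUGE-L):

```python
from collections import Counter

def rouge_n(candidate: str, reference: str, n: int = 1) -> dict:
    """Simplified ROUGE-N: precision/recall/F1 over n-gram overlap."""
    def ngrams(text: str, n: int) -> Counter:
        tokens = text.lower().split()
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    cand, ref = ngrams(candidate, n), ngrams(reference, n)
    if not cand or not ref:
        return {"precision": 0.0, "recall": 0.0, "f1": 0.0}
    # Clipped overlap: each n-gram counts at most as often as it appears in both.
    overlap = sum((cand & ref).values())
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}
```

For example, `rouge_n("the cat sat", "the cat sat on the mat")` gives perfect precision (every candidate unigram appears in the reference) but recall of only 0.5, since the candidate covers three of the six reference unigrams.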
- **Flan T5 Base Peft Dialogue Summary Before** (agoor97) · Apache-2.0 · Text Generation · TensorBoard, English · 20 downloads, 0 likes
  A fine-tuned version of the google/flan-t5-base model for dialogue summarization tasks, using PEFT for parameter-efficient fine-tuning.
- **Bodo Bart Large Summ** (Mwnthai) · Apache-2.0 · Text Generation · Transformers · 19 downloads, 1 like
  A text summarization model based on facebook/bart-large, fine-tuned on the Bodo legal summarization dataset.
- **T5 Small Common Corpus Topic Batch** (Pclanglais) · Apache-2.0 · Large Language Model · Transformers · 21 downloads, 2 likes
  A text-processing model fine-tuned from the T5-small architecture, focusing on text generation and transformation for specific tasks.
- **Text Summarization Cnn** (vmkhoa2000) · Apache-2.0 · Text Generation · Transformers · 125 downloads, 0 likes
  A text summarization model fine-tuned from Falconsai/text_summarization, extracting key information from long texts to generate concise summaries.
- **Autotrain Stripped Data Training Biobart 90151144184** (gpadam) · Text Generation · Transformers, Other · 27 downloads, 0 likes
  An abstract-generation model trained via the AutoTrain platform, based on the BioBART architecture and suitable for text summarization tasks.
- **Autotrain Financial Convo Summary 89094143854** (jsonfin17) · Text Generation · Transformers, Other · 173 downloads, 3 likes
  A financial conversation summarization model trained using AutoTrain, designed to extract key information from financial dialogues and generate concise summaries.
- **Sinmt5 Tuned** (Hamza-Ziyard) · Text Generation · Transformers · 53 downloads, 0 likes
  A summarization model fine-tuned on an unknown dataset from Google's mT5 model, supporting multilingual text summarization tasks.
- **H2 Keywordextractor** (transformer3) · Text Generation · Transformers, Other · 6,592 downloads, 47 likes
  A summarization model trained on the AutoTrain platform, suitable for financial text summarization tasks.
- **Codet5 Base Generate Docstrings For Python Condensed** (DunnBC22) · Apache-2.0 · Text Generation · Transformers, English · 17 downloads, 2 likes
  A fine-tuned version of Salesforce/codet5-base, designed to generate docstrings for Python functions.
- **T5 Pegasus Ch Ans** (lambdarw) · Text Generation · Transformers, Chinese · 13 downloads, 0 likes
  A Chinese abstract-generation model based on the T5-Pegasus architecture, trained on the AutoTrain platform and suitable for extracting key information from text to generate summaries.
- **Flan T5 Large Finetuned Openai Summarize From Feedback** (mrm8488) · Apache-2.0 · Text Generation · Transformers · 50 downloads, 6 likes
  A text summarization model based on google/flan-t5-large, fine-tuned on feedback-based summarization datasets.
- **Flan T5 Small Finetuned Openai Summarize From Feedback** (mrm8488) · Apache-2.0 · Text Generation · Transformers · 33 downloads, 9 likes
  A fine-tuned version of google/flan-t5-small on the summarize_from_feedback dataset, primarily designed for text summarization tasks.
- **Mt5 Summarize Japanese** (tsmatz) · Apache-2.0 · Text Generation · Transformers, Japanese · 552 downloads, 19 likes
  A Japanese summarization model fine-tuned from google/mt5-small, specifically designed for news story summarization.
- **TGL 3** (awesometeng) · Apache-2.0 · Text Generation · Transformers · 13 downloads, 1 like
  TGL-3 is an abstract-generation model fine-tuned from t5-small on 23,000 abstracts from openreview.net, supporting academic text summarization tasks.
- **T5 Small Finetuned Cnn News** (shivaniNK8) · Apache-2.0 · Text Generation · Transformers · 25 downloads, 0 likes
  A text summarization model based on T5-small, fine-tuned on the CNN/DailyMail news dataset and capable of generating concise summaries of news articles.
- **T5 Small Finetuned Cnn V2** (ubikpt) · Apache-2.0 · Text Generation · Transformers · 20 downloads, 1 like
  A text summarization model based on T5-small, fine-tuned on the cnn_dailymail dataset.
- **T5 Small Finetuned Cnn** (ubikpt) · Apache-2.0 · Text Generation · Transformers · 55 downloads, 0 likes
  A text summarization model based on the T5-small architecture, fine-tuned on the cnn_dailymail dataset and excelling at news summarization tasks.
- **T5 Small Finetuned Contradiction** (domenicrosati) · Apache-2.0 · Text Generation · Transformers · 21 downloads, 2 likes
  A text generation model based on the T5-small architecture, fine-tuned on the SNLI dataset and specializing in contradiction recognition and summarization tasks.
- **Mt5 Small Finetuned Amazon En Zh TW** (peterhsu) · Apache-2.0 · Text Generation · Transformers · 28 downloads, 0 likes
  A text summarization model based on google/mt5-small, fine-tuned on the Amazon dataset and supporting English to Traditional Chinese abstract generation.
- **Distilbart Cnn 6 6** (sshleifer) · Apache-2.0 · Text Generation · English · 48.17k downloads, 31 likes
  DistilBART is a distilled version of the BART model, optimized for text summarization; it significantly improves inference speed while maintaining high performance.
- **Distilbart Cnn 12 3** (sshleifer) · Apache-2.0 · Text Generation · English · 145 downloads, 4 likes
  DistilBART is a distilled version of the BART model, focused on text summarization; it significantly reduces model size and inference time while maintaining high performance.
- **Distilbart Cnn 12 6 Finetuned Weaksup 1000** (cammy) · Apache-2.0 · Text Generation · Transformers · 79 downloads, 1 like
  A text summarization model fine-tuned from distilbart-cnn-12-6, trained for 1,000 steps on weakly supervised data.
- **Distilbart Xsum 12 3** (sshleifer) · Apache-2.0 · Text Generation · English · 579 downloads, 11 likes
  DistilBART is a distilled version of the BART model, optimized for summarization; it significantly reduces model parameters and inference time while maintaining high performance.
- **Distilbart Xsum 12 1** (sshleifer) · Apache-2.0 · Text Generation · English · 396 downloads, 7 likes
  DistilBART is a distilled version of the BART model, focused on text summarization; it significantly reduces model parameters and inference time while maintaining high performance.
- **Mt5 Small Finetuned Arxiv Cs Finetuned Arxiv Cs Full** (shamikbose89) · Apache-2.0 · Text Generation · Transformers · 16 downloads, 5 likes
  A text summarization model based on mt5-small, fine-tuned on the arXiv computer science paper dataset and excelling at generating concise summaries of technical content.
- **Bart Large Finetuned Xsum** (aristotletan) · MIT · Text Generation · Transformers · 14 downloads, 0 likes
  A text generation model based on the BART-large architecture, fine-tuned on the wsj_markets dataset and excelling at summarization tasks.
- **Distilbart Xsum 1 1** (sshleifer) · Apache-2.0 · Text Generation · English · 2,198 downloads, 0 likes
  DistilBART is a distilled version of the BART model, optimized for text summarization; it significantly reduces model size and inference time while maintaining high performance.
- **Bart Base Finetuned Pubmed** (Kevincp560) · Apache-2.0 · Text Generation · Transformers · 141 downloads, 0 likes
  A fine-tuned version of facebook/bart-base on the pub_med_summarization_dataset, primarily designed for medical literature summarization tasks.
- **Bart Large Finetuned Pubmed** (Kevincp560) · Apache-2.0 · Text Generation · Transformers · 20 downloads, 1 like
  A text generation model based on the BART-large architecture, fine-tuned on biomedical paper abstract datasets.