# ROUGE Optimization
- **T5 Small Finetuned Stock News 2** · Kallia · Apache-2.0 · Text Generation · Transformers · 40 downloads · 0 likes
  A text summarization model fine-tuned from t5-small that performs well on financial news.
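The models in this list are summarizers of the kind usually ranked by the ROUGE scores this section's title refers to. As a minimal illustrative sketch (not tied to any listed model, and using simple whitespace tokenization rather than the official ROUGE tokenizer), ROUGE-1 F1 measures clipped unigram overlap between a candidate summary and a reference:

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: clipped unigram overlap between candidate and reference."""
    cand = candidate.lower().split()
    ref = reference.lower().split()
    if not cand or not ref:
        return 0.0
    # Clipped overlap: a word counts at most as many times as it
    # appears in the reference (Counter & takes the element-wise minimum).
    overlap = sum((Counter(cand) & Counter(ref)).values())
    precision = overlap / len(cand)
    recall = overlap / len(ref)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example: all 3 candidate words appear in the 6-word reference,
# so precision is 1.0, recall is 0.5, and F1 is 2/3.
score = rouge1_f1("the cat sat", "the cat sat on the mat")
```

Production evaluations normally use a maintained implementation (for example the `rouge_score` package) with stemming and proper tokenization, but the core computation is the overlap ratio above.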
- **Meetingscript** · Shaelois · Apache-2.0 · Text Generation · Transformers · English · 21 downloads · 1 like
  A meeting-transcript summarization model built on the BigBird-Pegasus architecture that can process meeting records of up to 4,096 tokens and generate concise summaries.
- **T5 Small Title Ft** · swarup3204 · Apache-2.0 · Text Generation · Transformers · English · 25 downloads · 0 likes
  T5-small is the compact version of Google's T5 (Text-to-Text Transfer Transformer) model, suitable for a variety of natural language processing tasks.
- **Flant5summarize** · Ismetdh · Text Generation · English · 21 downloads · 0 likes
  A summarization model fine-tuned from Flan-T5 base on the CNN/DailyMail dataset that compresses news articles into concise summaries.
- **Bart Large Cnn Finetuned For Email And Text** · vapit · MIT · Text Generation · English · 17 downloads · 0 likes
  BART-large-CNN is a pretrained model based on the BART architecture, designed specifically for text summarization.
- **Khmer Mt5 Summarization** · songhieng · MIT · Text Generation · Transformers · Other · 58 downloads · 2 likes
  An mT5 model fine-tuned for Khmer summarization, based on Google's mT5-small. Trained on a Khmer text dataset, it generates concise, semantically rich Khmer summaries.
- **Pegasus Large Privacy Policy Summarization V2** · AryehRotberg · MIT · Text Generation · Transformers · English · 13 downloads · 0 likes
  Fine-tuned from Google's Pegasus-large model, designed specifically to condense lengthy privacy-policy documents into concise summaries.
- **Fewshot Xsum Bart** · bhargavis · MIT · Text Generation · 19 downloads · 1 like
  A few-shot summarization model based on BART-large, trained on 100 samples from the XSUM dataset to demonstrate the potential of few-shot learning for summarization.
- **PEGASUS Medium** · fatihfauzan26 · MIT · Text Generation · Other · 87 downloads · 1 like
  A fine-tuned version of the PEGASUS model, optimized for abstractive summarization of Indonesian news articles.
- **Scientific Paper Summarizer** · Harsit · Text Generation · 40 downloads · 3 likes
  A specialized model for generating scientific-paper abstracts, fine-tuned from the PEGASUS architecture.
- **T5 Small Abstractive Summarizer** · MK-5 · Apache-2.0 · Text Generation · Transformers · 80 downloads · 0 likes
  A T5-small-based summarization model fine-tuned on the multi_news dataset that excels at generating abstractive summaries.
- **Bart Base Job Info Summarizer** · avisena · Text Generation · Safetensors · 1,961 downloads · 4 likes
  A fine-tuned version of facebook/bart-base for job-posting summarization, designed to generate compelling summaries from recruitment listings.
- **Samsuntextsum** · neuronstarml · MIT · Text Generation · Transformers · English · 20 downloads · 0 likes
  A Pegasus-based model fine-tuned on the SAMSum dataset for English dialogue summarization.
- **Test Push** · tarekziade · Apache-2.0 · Image-to-Text · Transformers · 17 downloads · 0 likes
  distilvit is an image-to-text model pairing a ViT image encoder with a distilled GPT-2 text decoder, capable of generating textual descriptions of images.
- **Vit Base Patch16 224 Distilgpt2** · tarekziade · Apache-2.0 · Image-to-Text · Transformers · 17 downloads · 0 likes
  DistilViT is an image-captioning model based on a Vision Transformer (ViT) encoder and a distilled GPT-2 decoder that converts images into textual descriptions.
- **Lexlm Longformer BART Fixed V1** · MikaSie · Text Generation · Transformers · English · 15 downloads · 2 likes
  An abstractive summarization model fine-tuned from BART, designed for lengthy legal documents and using a multi-step summarization approach.
- **Mt5 Small Finetuned Cnndailymail En** · Skier8402 · Apache-2.0 · Text Generation · Transformers · 16 downloads · 0 likes
  A summarization model fine-tuned from google/mt5-small on the cnn_dailymail dataset.
- **Bart Summarizer Model** · KipperDev · MIT · Text Generation · Transformers · English · 30 downloads · 3 likes
  A summarization model fine-tuned from facebook/bart-base that excels at generating concise, coherent summaries of long texts.
- **Pegasus Indonesian Base Finetune** · thonyyy · Apache-2.0 · Text Generation · Transformers · Other · 172 downloads · 2 likes
  An Indonesian summarization model based on the PEGASUS architecture, fine-tuned on the Indosum, Liputan6, and XLSum datasets and well suited to news summarization.
- **Rut5 Base Summ** · d0rj · Text Generation · Transformers · Multilingual · 207 downloads · 22 likes
  A Russian text and dialogue summarization model fine-tuned from ruT5-base, supporting multi-domain Russian summarization.
- **Tst Summarization** · ChaniM · Text Generation · Transformers · English · 23 downloads · 0 likes
  A news summarization model fine-tuned from google/pegasus-xsum on the cnn_dailymail dataset.
- **Long T5 Base Sumstew** · Joemgu · Text Generation · Transformers · Multilingual · 27 downloads · 1 like
  A summarization model based on the Long-T5 architecture, supporting multilingual summarization.
- **Sinmt5** · Hamza-Ziyard · Text Generation · Transformers · 14 downloads · 0 likes
  A multilingual summarization model based on mT5, fine-tuned for Sinhala to generate abstractive summaries of CNN/DailyMail news in Sinhala.
- **Long T5 Base Govreport** · AleBurzio · Apache-2.0 · Text Generation · Transformers · English · 866 downloads · 2 likes
  A government-report summarization model based on Long-T5, optimized for long-document summarization.
- **Mbart Large 50 Finetuned Stocks Event All** · jiaoqsh · MIT · Text Generation · Transformers · 18 downloads · 0 likes
  A summarization model fine-tuned from facebook/mbart-large-50, specializing in stock-event summarization.
- **Flan T5 Base Tldr News** · ybagoury · Text Generation · Transformers · English · 16 downloads · 2 likes
  A fine-tuned T5 model specialized in TLDR-style news summarization and headline generation.
- **Pegasus Multi News NewsSummarization BBC** · DunnBC22 · Text Generation · Transformers · English · 658 downloads · 2 likes
  A news summarization model fine-tuned from the Pegasus architecture and optimized for BBC news content.
- **Led Base 16384 Billsum Summarization** · AlgorithmicResearchGroup · Text Generation · Transformers · Multilingual · 15 downloads · 1 like
  A fine-tuned version of led-base-16384 on the billsum dataset, designed for long-document summarization.
- **Article2kw Test1.2 Barthez Orangesum Title Finetuned For Summerization** · bthomas · Apache-2.0 · Text Generation · Transformers · 18 downloads · 0 likes
  A summarization model fine-tuned from barthez-orangesum-title that produces keyword-style summaries of French texts.
- **Autotrain Biomedical Sc Summ 1217846142** · L-macc · Text Generation · Transformers · Other · 13 downloads · 0 likes
  An abstract-generation model trained with AutoTrain, designed for biomedical scientific literature.
- **Autotrain Hello There 1209845735** · Jacobsith · Text Generation · Transformers · Other · 13 downloads · 0 likes
  A summarization model trained with AutoTrain that automatically generates summaries from input text.
- **Long T5 Tglobal Base 16384 Booksum V11 Big Patent V2** · pszemraj · BSD-3-Clause · Text Generation · Transformers · 21 downloads · 2 likes
  A long-text summarization model based on the T5 architecture that handles inputs of up to 16,384 tokens, well suited to book and technical-document summarization.
- **T5 Small Headline Generator** · JulesBelveze · MIT · Text Generation · Transformers · English · 122 downloads · 9 likes
  A headline-generation model fine-tuned from t5-small that produces concise headlines from news text.
- **Lsg Bart Base 4096 Mediasum** · ccdv · Text Generation · Transformers · English · 44 downloads · 0 likes
  A BART-base model using LSG attention, fine-tuned for long-text summarization on the MediaSum dataset and supporting sequences of up to 4,096 tokens.
- **Lsg Bart Base 4096 Multinews** · ccdv · Text Generation · Transformers · English · 26 downloads · 4 likes
  A BART-base model using LSG attention, designed for long-text summarization with input sequences of up to 4,096 tokens.
- **Lsg Bart Base 4096 Wcep** · ccdv · Text Generation · Transformers · English · 27 downloads · 2 likes
  A long-text summarization model based on the LSG-BART architecture, fine-tuned on the WCEP-10 dataset and supporting sequences of up to 4,096 tokens.
- **Bart Base Booksum** · KamilAin · Apache-2.0 · Text Generation · Transformers · English · 19 downloads · 1 like
  A book-summarization model based on BART-base, fine-tuned on the BookSum dataset.
- **Lsg Bart Base 16384 Pubmed** · ccdv · Text Generation · Transformers · English · 22 downloads · 6 likes
  A long-sequence summarization model based on the BART architecture, fine-tuned on the PubMed scientific-paper dataset and able to process inputs of up to 16,384 tokens.
- **Lsg Bart Base 4096 Pubmed** · ccdv · Text Generation · Transformers · English · 21 downloads · 3 likes
  A long-sequence model using LSG attention, fine-tuned for scientific-paper summarization.
- **Lsg Bart Base 16384 Arxiv** · ccdv · Text Generation · Transformers · English · 29 downloads · 5 likes
  A long-sequence model based on the BART architecture, optimized for scientific-paper summarization and supporting inputs of up to 16,384 tokens.
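Besides ROUGE-1, summarizers like those above commonly report ROUGE-L, which rewards the longest common subsequence (LCS) shared with the reference rather than bag-of-words overlap. A self-contained illustrative sketch, again with simplified whitespace tokenization:

```python
def rouge_l_f1(candidate: str, reference: str) -> float:
    """ROUGE-L F1 based on the longest common subsequence (LCS) of tokens."""
    cand = candidate.lower().split()
    ref = reference.lower().split()
    if not cand or not ref:
        return 0.0
    # Classic O(n*m) dynamic-programming table for LCS length.
    dp = [[0] * (len(ref) + 1) for _ in range(len(cand) + 1)]
    for i, cw in enumerate(cand, 1):
        for j, rw in enumerate(ref, 1):
            if cw == rw:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    lcs = dp[-1][-1]
    precision = lcs / len(cand)
    recall = lcs / len(ref)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Unlike ROUGE-1, word order matters here: reordering the words of a perfect summary lowers its ROUGE-L score even though its unigram overlap is unchanged.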