# Sequence-to-Sequence
## Khmer mT5 Summarization
Author: songhieng · License: MIT · Downloads: 58 · Likes: 2 · Tags: Text Generation, Transformers, Other

An mT5 model fine-tuned for Khmer text summarization, based on Google's mT5-small. It was fine-tuned on a Khmer text dataset and generates concise, semantically rich Khmer summaries.

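As a usage illustration, the sketch below loads a fine-tuned mT5 summarizer through the standard Transformers API. The repository id and the generation settings are assumptions inferred from the listing above, not confirmed by it.

```python
# Minimal sketch: summarizing Khmer text with a fine-tuned mT5 checkpoint.
# The model id below is an assumption inferred from the listing; replace it
# with the actual repository id if it differs.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "songhieng/khmer-mt5-summarization"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "..."  # Khmer article text to summarize
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```
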
## M2M100 1.2B CTranslate2
Author: entai2965 · License: MIT · Downloads: 92 · Likes: 3 · Tags: Machine Translation, Supports Multiple Languages

M2M100 is a 1.2-billion-parameter multilingual encoder-decoder model that supports direct translation between 100 languages; this entry provides it in the CTranslate2 format for efficient inference.

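CTranslate2 runs converted models through its own Translator API rather than through Transformers. The sketch below follows the usual CTranslate2 recipe for M2M100; the local model directory is an assumption and must point at a checkpoint already converted with the ct2-transformers-converter tool.

```python
# Minimal sketch: English-to-French translation with an M2M100 model in
# CTranslate2 format. "m2m100-1.2B-ct2" is an assumed local directory holding
# the converted weights; the tokenizer still comes from the original HF model.
import ctranslate2
import transformers

translator = ctranslate2.Translator("m2m100-1.2B-ct2")  # assumed converted-model path
tokenizer = transformers.AutoTokenizer.from_pretrained("facebook/m2m100_1.2B")
tokenizer.src_lang = "en"

source = tokenizer.convert_ids_to_tokens(tokenizer.encode("Hello, how are you?"))
target_prefix = [tokenizer.lang_code_to_token["fr"]]  # force French output

results = translator.translate_batch([source], target_prefix=[target_prefix])
target_tokens = results[0].hypotheses[0][1:]  # drop the leading language token
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target_tokens)))
```
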
## T5 Query Reformulation RL
Author: prhegde · License: Apache-2.0 · Downloads: 366 · Likes: 6 · Tags: Large Language Model, Transformers, Supports Multiple Languages

A generative model designed for search query rewriting; it combines a sequence-to-sequence architecture with a reinforcement learning framework to produce diverse, relevant query rewrites.

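Because the goal is a set of diverse rewrites rather than a single best output, sampling-based decoding is a natural fit. The sketch below is an assumed usage pattern built on the standard Transformers generation API; the repository id is taken from the listing and should be verified.

```python
# Minimal sketch: producing several candidate query rewrites via sampling.
# The model id is assumed from the listing above.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "prhegde/t5-query-reformulation-RL"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
model.eval()

query = "how to tighten a loose door hinge"
inputs = tokenizer(query, return_tensors="pt")

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_length=32,
        do_sample=True,       # sampling yields varied rewrites
        top_k=50,
        num_return_sequences=4,
    )
for ids in outputs:
    print(tokenizer.decode(ids, skip_special_tokens=True))
```
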
## SST T5 Base
Author: kennethge123 · License: Apache-2.0 · Downloads: 17 · Likes: 2 · Tags: Large Language Model, Transformers

A text generation model based on T5-base and fine-tuned on the SST dataset.

## BART Large CNN SAMSum
Author: AdamCodd · License: Apache-2.0 · Downloads: 18 · Likes: 2 · Tags: Text Generation, Transformers

A dialogue summarization model based on the BART-large architecture and fine-tuned on the SAMSum dataset.

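The SAMSum-style checkpoints in this listing typically expect the dialogue as a single string with one "Speaker: utterance" turn per line. A minimal sketch with the Transformers summarization pipeline is shown below; the model id is only a placeholder and can be swapped for whichever SAMSum checkpoint from this page you choose.

```python
# Minimal sketch: abstractive dialogue summarization with a SAMSum-style
# checkpoint. The model id is a placeholder; substitute one of the SAMSum
# models listed on this page.
from transformers import pipeline

summarizer = pipeline("summarization", model="philschmid/bart-large-cnn-samsum")  # placeholder id

dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Perfect, see you then!"
)
print(summarizer(dialogue, max_length=60, min_length=10)[0]["summary_text"])
```
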
## BART Base CNN Swe
Author: Gabriel · License: MIT · Downloads: 31 · Likes: 1 · Tags: Text Generation, Transformers, Other

A Swedish summarization model based on the BART architecture, fine-tuned on the CNN Daily Swedish dataset.

## BART Large XSum SAMSum
Author: lidiya · License: Apache-2.0 · Downloads: 10.96k · Likes: 38 · Tags: Text Generation, Transformers, English

A sequence-to-sequence model based on the BART architecture, designed specifically for dialogue summarization.

## BART Base CNN
Author: ainize · License: Apache-2.0 · Downloads: 749 · Likes: 15 · Tags: Text Generation, Transformers, English

A bart-base model fine-tuned on the CNN/DailyMail summarization dataset for text summarization tasks.

## BART Base
Author: facebook · License: Apache-2.0 · Downloads: 2.1M · Likes: 183 · Tags: Large Language Model, English

BART is a Transformer model combining a bidirectional encoder and an autoregressive decoder, suitable for text generation and understanding tasks.

## KoBART Base V1
Author: gogamza · License: MIT · Downloads: 2,077 · Likes: 1 · Tags: Large Language Model, Transformers, Korean

KoBART is a Korean pretrained model based on the BART architecture, suitable for various Korean natural language processing tasks.

## IT5 Base News Summarization
Author: gsarti · License: Apache-2.0 · Downloads: 405 · Likes: 5 · Tags: Text Generation, Other

An Italian news summarization model fine-tuned from the IT5 base model; it extracts the key information from news texts and generates concise summaries.

## BART Base Chinese
Author: fnlp · Downloads: 6,504 · Likes: 99 · Tags: Large Language Model, Transformers, Chinese

A pretrained asymmetric Transformer model for Chinese understanding and generation, supporting text-to-text generation tasks.

## BART Paraphrase
Author: eugenesiow · License: Apache-2.0 · Downloads: 2,334 · Likes: 30 · Tags: Text Generation, Transformers, English

A BART-large sequence-to-sequence (text generation) model fine-tuned on three paraphrase datasets for sentence rewriting.

## BART Base SAMSum
Author: lidiya · License: Apache-2.0 · Downloads: 77 · Likes: 4 · Tags: Text Generation, Transformers, English

A BART-based model fine-tuned on the SAMSum dialogue dataset for abstractive dialogue summarization.

## mBART Large 50 Finetuned OPUS PT-EN Translation
Author: Narrativa · Downloads: 126 · Likes: 5 · Tags: Machine Translation, Transformers, Supports Multiple Languages

A Portuguese-to-English neural machine translation (NMT) model based on mBART-50 large and fine-tuned on the OPUS-100 dataset.

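With mBART-50 checkpoints, the output language is selected by forcing its language code as the first generated token. The sketch below shows this standard pattern; the repository id is assumed from the listing above.

```python
# Minimal sketch: Portuguese-to-English translation with an mBART-50 model.
# The model id is assumed from the listing; forced_bos_token_id selects the
# output language, which is the standard mBART-50 mechanism.
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_id = "Narrativa/mbart-large-50-finetuned-opus-pt-en-translation"  # assumed id
tokenizer = MBart50TokenizerFast.from_pretrained(model_id, src_lang="pt_XX")
model = MBartForConditionalGeneration.from_pretrained(model_id)

inputs = tokenizer("Eu gosto de aprender novas línguas.", return_tensors="pt")
generated = model.generate(
    **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"]
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```
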