# Long Text Summarization
- **Meetingscript** (Shaelois · Apache-2.0 · 21 downloads · 1 like): A meeting-transcript summarization model built on the BigBird-Pegasus architecture; it accepts meeting records of up to 4,096 tokens and generates concise summaries. *Text Generation · Transformers · English*
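Models in this class have a hard input ceiling (4,096 tokens here), so longer transcripts must be split before summarization. A minimal sketch of one common approach, overlapping chunking; whitespace-separated words stand in for real tokenizer tokens, and the budget and overlap values are illustrative:

```python
def chunk_transcript(text, max_tokens=4096, overlap=256):
    """Split a transcript into overlapping chunks that fit a fixed token budget.

    Whitespace words stand in for model tokens here; a real pipeline would
    count tokens with the model's own tokenizer instead.
    """
    words = text.split()
    if len(words) <= max_tokens:
        return [" ".join(words)]
    chunks = []
    step = max_tokens - overlap  # advance less than a full window so chunks overlap
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_tokens]))
        if start + max_tokens >= len(words):
            break
    return chunks
```

The overlap keeps sentences that straddle a chunk boundary visible in both neighboring chunks, at the cost of summarizing a small amount of text twice.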
- **Mistral 7B Summarizer SFT GGUF** (SURESHBEEKHANI · MIT · 65 downloads · 0 likes): A text summarization model based on the Mistral 7B architecture, fine-tuned with LoRA for efficiency and performance. *Text Generation · English*
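LoRA fine-tuning, as used for this model, freezes the pretrained weights and trains only a low-rank update. A minimal numpy sketch of the idea; the dimensions, rank, and scaling factor are illustrative, not this model's actual configuration:

```python
import numpy as np

d, k, r = 4096, 4096, 8                  # layer dims and LoRA rank (illustrative)
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))          # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01   # trainable down-projection (r x k)
B = np.zeros((d, r))                     # trainable up-projection, zero-initialized
alpha = 16                               # LoRA scaling factor

# Effective weight during/after fine-tuning; only A and B receive gradients.
W_eff = W + (alpha / r) * (B @ A)

full_params = d * k
lora_params = r * k + d * r
print(lora_params / full_params)         # → 0.00390625
```

Because only `A` and `B` are trained, the trainable parameter count drops from d·k to r·(d+k), under 0.4 % of the full layer in this sketch; zero-initializing `B` means the model starts out identical to the frozen base.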
- **Pegasus X Base Synthsumm Open 16k** (BEE-spoke-data · Apache-2.0 · 115 downloads · 2 likes): A text summarization model fine-tuned from pegasus-x-base on synthetic data; performs strongly on long-document summarization tasks. *Text Generation · Transformers · English*
- **Qwen2 1.5B Summarize** (thepowerfuldeez · Apache-2.0 · 228 downloads · 1 like): A specialized summarization model fine-tuned for two rounds from Qwen2-1.5B-Instruct. *Text Generation · Transformers · English*
- **Bart Finetuned Conversational Summarization** (Mr-Vicky-01 · MIT · 41 downloads · 7 likes): A text summarization model fine-tuned from the BART architecture, designed to generate concise summaries of conversations and news articles. *Text Generation · Transformers · English*
- **Bart Finetuned Text Summarization** (suriya7 · MIT · 1,547 downloads · 9 likes): A fine-tuned BART model that generates concise, accurate summaries. *Text Generation · Transformers · English*
- **Long T5 Tglobal Base Synthsumm Direct** (pszemraj · Apache-2.0 · 15 downloads · 1 like): A Long-T5-based summarization model fine-tuned on the synthetic synthsumm dataset, specializing in long-text summarization. *Text Generation · Transformers · English*
- **Llama 2 7B 32K** (togethercomputer · 5,411 downloads · 538 likes): An open-source long-context language model fine-tuned from Meta's Llama-2 7B, extending the supported context length to 32K tokens. *Large Language Model · Transformers · English*
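Context extensions like this 32K variant are commonly trained with position interpolation, which rescales RoPE positions so that a longer context maps back into the position range seen during pretraining. A minimal numpy sketch of the rescaling; it illustrates the general technique, not this model's exact recipe, and the head dimension is a toy value:

```python
import numpy as np

def rope_angles(positions, dim=8, base=10000.0, scale=1.0):
    """RoPE rotation angles per position and frequency; scale < 1 interpolates
    positions so an extended context stays inside the pretrained range."""
    inv_freq = 1.0 / base ** (np.arange(0, dim, 2) / dim)
    return np.outer(positions * scale, inv_freq)

pretrained_len, extended_len = 4096, 32768
scale = pretrained_len / extended_len           # 1/8

orig = rope_angles(np.arange(pretrained_len))
ext = rope_angles(np.arange(extended_len), scale=scale)
# Position 8 in the extended model sees exactly the angles position 1 saw
# originally; even position 32767 maps to 4095.875, still in-distribution.
```

The payoff is that the attention layers never see rotation angles larger than those encountered during pretraining, which is why a short fine-tune suffices to adapt the model.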
- **Long T5 Base Sumstew** (Joemgu · 27 downloads · 1 like): A Long-T5-based summarization model supporting multilingual text summarization. *Text Generation · Transformers · Multilingual*
- **Pegasus X Sumstew** (Joemgu · Apache-2.0 · 31 downloads · 1 like): An English long-text summarization model fine-tuned from Pegasus-x-large; supports abstractive summarization of complex texts such as academic manuscripts and meeting minutes. *Text Generation · Transformers · English*
- **Autotrain Summarization Bart Longformer 54164127153** (Udit191 · 16 downloads · 0 likes): A text summarization model trained on the AutoTrain platform using a BART-Longformer architecture. *Text Generation · Transformers · Other*
- **Long T5 Tglobal Xl 16384 Book Summary** (pszemraj · BSD-3-Clause · 58 downloads · 19 likes): A LongT5-XL model fine-tuned on the BookSum dataset for long-text summarization; generates SparkNotes-like summaries. *Text Generation · Transformers*
- **Led Base 16384 Billsum Summarization** (AlgorithmicResearchGroup · 15 downloads · 1 like): A fine-tuned version of led-base-16384 trained on the billsum dataset for long-document summarization. *Text Generation · Transformers · Multilingual*
- **Long T5 Tglobal Large Pubmed 3k Booksum 16384 WIP15** (pszemraj · BSD-3-Clause · 17 downloads · 0 likes): A large Long-T5-based summarization model optimized for book and long-document summarization. *Text Generation · Transformers*
- **Long T5 Tglobal Base 16384 Booksum V12** (pszemraj · BSD-3-Clause · 109 downloads · 4 likes): An optimized long-text summarization model based on the T5 architecture; handles inputs of up to 16,384 tokens and performs well on book summarization. *Text Generation · Transformers*
- **Long T5 Tglobal Large Pubmed 3k Booksum 16384 WIP** (pszemraj · Apache-2.0 · 65 downloads · 1 like): A large Long-T5-based summarization model optimized for long-document summarization, with a context length of 16,384 tokens. *Text Generation · Transformers*
- **Lsg Bart Base 16384 Mediasum** (ccdv · 22 downloads · 2 likes): A BART model using LSG attention, optimized for long-sequence summarization with input sequences of up to 16,384 tokens. *Text Generation · Transformers · English*
- **Longt5 Tglobal Large 16384 Pubmed 3k Steps** (Stancld · Apache-2.0 · 1,264 downloads · 22 likes): LongT5 is a T5-based long-sequence text-to-text Transformer that uses a transient-global attention mechanism, making it well suited to long-text processing. *Text Generation · English*
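LongT5's transient-global attention lets every token attend to per-block summary tokens that are recomputed inside each layer rather than stored as extra inputs (hence "transient"). A simplified numpy sketch using block means; the real model aggregates and normalizes within blocks differently, the sizes here are toy values, and the tail block is zero-padded for simplicity:

```python
import numpy as np

def transient_global_tokens(x, block_size=16):
    """Compute one summary token per fixed-size block of the sequence.

    x: (seq_len, d_model). Returns (num_blocks, d_model); in TGlobal
    attention, every token attends to these summaries in addition to
    its own local sliding window.
    """
    seq_len, d = x.shape
    pad = (-seq_len) % block_size          # zero-pad so blocks divide evenly
    if pad:
        x = np.vstack([x, np.zeros((pad, d))])
    blocks = x.reshape(-1, block_size, d)
    return blocks.mean(axis=1)

x = np.random.default_rng(0).standard_normal((100, 32))
g = transient_global_tokens(x, block_size=16)   # 7 summary tokens for 100 inputs
```

The point of the construction is cost: each token compares against its window plus only `seq_len / block_size` global summaries, instead of the full sequence.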
- **Lsg Bart Base 4096 Mediasum** (ccdv · 44 downloads · 0 likes): A BART-base model using LSG attention, fine-tuned on the MediaSum dataset for long-text summarization; processes sequences of up to 4,096 tokens. *Text Generation · Transformers · English*
- **Lsg Bart Base 4096 Multinews** (ccdv · 26 downloads · 4 likes): A BART-base model using LSG attention, designed for long-text summarization with input sequences of up to 4,096 tokens. *Text Generation · Transformers · English*
- **Lsg Bart Base 4096 Wcep** (ccdv · 27 downloads · 2 likes): A long-text summarization model based on the LSG-BART architecture, fine-tuned on the WCEP-10 dataset; processes sequences of up to 4,096 tokens. *Text Generation · Transformers · English*
- **Lsg Bart Base 16384 Pubmed** (ccdv · 22 downloads · 6 likes): A long-sequence summarization model based on the BART architecture, fine-tuned on the PubMed scientific-paper dataset; processes input sequences of up to 16,384 tokens. *Text Generation · Transformers · English*
- **Lsg Bart Base 4096 Pubmed** (ccdv · 21 downloads · 3 likes): A long-sequence model using the LSG attention mechanism, fine-tuned for scientific-paper summarization. *Text Generation · Transformers · English*
- **Lsg Bart Base 16384 Arxiv** (ccdv · 29 downloads · 5 likes): A long-sequence model based on the BART architecture, optimized for scientific-paper summarization with inputs of up to 16,384 tokens. *Text Generation · Transformers · English*
- **Lsg Bart Large 4096** (ccdv · 15 downloads · 0 likes): An improved long-sequence model based on BART-large that combines local, sparse, and global attention to handle long-text tasks efficiently. *Text Generation · Transformers · English*
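The local + sparse + global pattern can be pictured as a boolean mask over token pairs: a token attends to a position only where the mask is true. A toy numpy sketch; the window size, stride, and global-token count are illustrative and much smaller than the real model's block-based implementation:

```python
import numpy as np

def lsg_mask(seq_len, window=4, sparsity=4, n_global=2):
    """Boolean attention mask combining LSG's three patterns."""
    m = np.zeros((seq_len, seq_len), dtype=bool)
    idx = np.arange(seq_len)
    # local: each token sees neighbours within a sliding window
    m |= np.abs(idx[:, None] - idx[None, :]) <= window
    # sparse: strided long-range connections every `sparsity` positions
    m |= (idx[:, None] - idx[None, :]) % sparsity == 0
    # global: the first n_global tokens attend everywhere and are seen by all
    m[:n_global, :] = True
    m[:, :n_global] = True
    return m

m = lsg_mask(64)
density = m.mean()   # well below a full mask's 1.0 even at this tiny size
```

In the real model the sparse connections are selected per block rather than by a fixed stride, but the shape of the trade-off is the same: most token pairs are never compared.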
- **Pegasus Large Summary Explain** (pszemraj · Apache-2.0 · 19 downloads · 4 likes): A large summarization model based on the PEGASUS architecture, fine-tuned on the booksum dataset; excels at generating easy-to-understand, SparkNotes-style summaries. *Text Generation · Transformers · English*
- **Bigbird Pegasus Large Bigpatent** (google · Apache-2.0 · 945 downloads · 40 likes): BigBird is a sparse-attention Transformer that processes sequences of up to 4,096 tokens, suited to tasks such as long-document summarization. *Text Generation · Transformers · English*
- **Bigbird Pegasus Large Arxiv** (google · Apache-2.0 · 8,528 downloads · 61 likes): BigBird is a sparse-attention Transformer that handles much longer sequences than standard full attention, suited to tasks such as long-document summarization. *Text Generation · Transformers · English*
- **Bigbird Pegasus Large Pubmed** (google · Apache-2.0 · 2,031 downloads · 47 likes): BigBirdPegasus is a sparse-attention Transformer that handles long sequences, making it especially suitable for long-document summarization. *Text Generation · Transformers · English*
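Even with a 4,096-token window, book-length inputs can be covered by summarizing chunks and then summarizing the concatenation of those summaries. A minimal sketch of this map-reduce pattern with a stub extractive summarizer; in practice `summarize` would wrap a summarization pipeline built on one of the models listed here:

```python
def summarize_long(text, summarize, max_words=400):
    """Map-reduce summarization: split the input into chunks, summarize
    each, then summarize the concatenated chunk summaries.

    `summarize` is any callable mapping text -> summary; word counts stand
    in for model tokens, and the budget is illustrative.
    """
    words = text.split()
    chunks = [" ".join(words[i:i + max_words])
              for i in range(0, len(words), max_words)]
    partials = [summarize(c) for c in chunks]            # map step
    combined = " ".join(partials)
    return summarize(combined) if len(chunks) > 1 else partials[0]  # reduce

# Stub summarizer for illustration only: keep each chunk's first 5 words.
stub = lambda t: " ".join(t.split()[:5])
```

For very long inputs the reduce step may itself exceed the budget, in which case the same function can be applied recursively to the combined summaries.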
- **Pegasus Summarization** (AlekseyKulnevich · 34 downloads · 0 likes): A sequence-to-sequence Transformer based on Google's PEGASUS architecture, fine-tuned to generate high-quality summaries. *Text Generation · Transformers*