Long T5 Tglobal Base 16384 Booksum V12
A long-document summarization model built on the T5 architecture, supporting inputs of up to 16,384 tokens and performing strongly on book summarization tasks.
Downloads: 109
Released: 9/9/2022
Model Overview
This model is optimized specifically for long-document summarization. It uses the T5 architecture with extended long-input handling and is suited to condensing books, scientific papers, and other long-form content.
Model Features
Very long context handling
Accepts inputs of up to 16,384 tokens, making it suitable for book chapters and other very long content.
Domain-focused training
Fine-tuned on the BookSum dataset, giving strong results on academic literature and book-content summaries.
Adjustable summary length
Can generate summaries of varying length (8-64 tokens) to fit different needs.
Model Capabilities
Long-document summarization
Content condensation
Book-chapter summarization
Scientific-paper summarization
Technical-document summarization
Use Cases
Academic research
Rapid paper reading
Generates concise summaries of long academic papers so researchers can quickly grasp the core content.
ROUGE-1 score of 30.00 on scientific-paper summarization.
Publishing
Book-content summarization
Automatically generates book-chapter summaries for tables of contents, reading guides, and other publishing scenarios.
ROUGE-1 score of 36.14 on the BookSum dataset.
Government reports
Policy-document summarization
Extracts the key information from long government reports.
ROUGE-1 score of 37.05 on the gov_report dataset.
🚀 pszemraj/long-t5-tglobal-base-16384-booksum-V12
This model is designed for text summarization, in particular long-document summary generation. It performs well across several datasets and uses a specific set of decoding parameters to optimize output quality.
🚀 Quick Start
The model is intended for text summarization and can handle many kinds of input, such as long documents and scientific papers. Below are some example inputs:
Earthquake-related text
large earthquakes along a given fault segment do not occur at random intervals
because it takes time to accumulate the strain energy for the rupture. The rates
at which tectonic plates move and accumulate strain at their boundaries are approximately
uniform. Therefore, in first approximation, one may expect that large ruptures
of the same fault segment will occur at approximately constant time intervals.
If subsequent main shocks have different amounts of slip across the fault, then
the recurrence time may vary, and the basic idea of periodic mainshocks must be
modified. For great plate boundary ruptures the length and slip often vary by
a factor of 2. Along the southern segment of the San Andreas fault the recurrence
interval is 145 years with variations of several decades. The smaller the standard
deviation of the average recurrence interval, the more specific could be the long
term prediction of a future mainshock.
Scientific-paper text
A typical feed-forward neural field algorithm. Spatiotemporal coordinates
are fed into a neural network that predicts values in the reconstructed domain.
Then, this domain is mapped to the sensor domain where sensor measurements are
available as supervision. Class and Section Problems Addressed Generalization
(Section 2) Inverse problems, ill-posed problems, editability; symmetries. Hybrid
Representations (Section 3) Computation & memory efficiency, representation capacity,
editability: Forward Maps (Section 4) Inverse problems Network Architecture (Section
5) Spectral bias, integration & derivatives. Manipulating Neural Fields (Section
6) Edit ability, constraints, regularization. Table 2: The five classes of techniques
in the neural field toolbox each addresses problems that arise in learning, inference,
and control. (Section 3). We can supervise reconstruction via differentiable forward
maps that transform Or project our domain (e.g, 3D reconstruction via 2D images;
Section 4) With appropriate network architecture choices, we can overcome neural
network spectral biases (blurriness) and efficiently compute derivatives and integrals
(Section 5). Finally, we can manipulate neural fields to add constraints and regularizations,
and to achieve editable representations (Section 6). Collectively, these classes
constitute a 'toolbox' of techniques to help solve problems with neural fields
There are three components in a conditional neural field: (1) An encoder or inference
function € that outputs the conditioning latent variable 2 given an observation
0 E(0) =2. 2 is typically a low-dimensional vector, and is often referred to aS
a latent code Or feature code_ (2) A mapping function 4 between Z and neural field
parameters O: Y(z) = O; (3) The neural field itself $. The encoder € finds the
most probable z given the observations O: argmaxz P(2/0). The decoder maximizes
the inverse conditional probability to find the most probable 0 given Z: arg-max
P(Olz). We discuss different encoding schemes with different optimality guarantees
(Section 2.1.1), both global and local conditioning (Section 2.1.2), and different
mapping functions Y (Section 2.1.3) 2. Generalization Suppose we wish to estimate
a plausible 3D surface shape given a partial or noisy point cloud. We need a suitable
prior over the sur-face in its reconstruction domain to generalize to the partial
observations. A neural network expresses a prior via the function space of its
architecture and parameters 0, and generalization is influenced by the inductive
bias of this function space (Section 5).
Transcribed audio (lecture) text
Is a else or outside the cob and tree written being of early client rope
and you have is for good reasons. On to the ocean in Orange for time. By's the
aggregate we can bed it yet. Why this please pick up on a sort is do and also
M Getoi's nerocos and do rain become you to let so is his brother is made in
use and Mjulia's's the lay major is aging Masastup coin present sea only of
Oosii rooms set to you We do er do we easy this private oliiishs lonthen might
be okay. Good afternoon everybody. Welcome to this lecture of Computational Statistics.
As you can see, I'm not socially my name is Michael Zelinger. I'm one of the
task for this class and you might have already seen me in the first lecture where
I made a quick appearance. I'm also going to give the tortillas in the last third
of this course. So to give you a little bit about me, I'm a old student here
with better Bulman and my research centres on casual inference applied to biomedical
disasters, so that could be genomics or that could be hospital data. If any of
you is interested in writing a bachelor thesis, a semester paper may be mastathesis
about this topic feel for reach out to me. you have my name on models and my email
address you can find in the directory I'd Be very happy to talk about it. you
do not need to be sure about it, we can just have a chat. So with that said, let's
get on with the lecture. There's an exciting topic today I'm going to start
by sharing some slides with you and later on during the lecture we'll move to
the paper. So bear with me for a few seconds. Well, the projector is starting
up. Okay, so let's get started. Today's topic is a very important one. It's
about a technique which really forms one of the fundamentals of data science,
machine learning, and any sort of modern statistics. It's called cross validation.
I know you really want to understand this topic I Want you to understand this
and frankly, nobody's gonna leave Professor Mineshousen's class without understanding
cross validation. So to set the stage for this, I Want to introduce you to the
validation problem in computational statistics. So the problem is the following:
You trained a model on available data. You fitted your model, but you know the
training data you got could always have been different and some data from the
environment. Maybe it's a random process. You do not really know what it is,
but you know that somebody else who gets a different batch of data from the same
environment they would get slightly different training data and you do not care
that your method performs as well. On this training data. you want to to perform
well on other data that you have not seen other data from the same environment.
So in other words, the validation problem is you want to quantify the performance
of your model on data that you have not seen. So how is this even possible? How
could you possibly measure the performance on data that you do not know The solution
to? This is the following realization is that given that you have a bunch of data,
you were in charge. You get to control how much that your model sees. It works
in the following way: You can hide data firms model. Let's say you have a training
data set which is a bunch of doubtless so X eyes are the features those are typically
hide and national vector. It's got more than one dimension for sure. And the
why why eyes. Those are the labels for supervised learning. As you've seen before,
it's the same set up as we have in regression. And so you have this training
data and now you choose that you only use some of those data to fit your model.
You're not going to use everything, you only use some of it the other part you
hide from your model. And then you can use this hidden data to do validation from
the point of you of your model. This hidden data is complete by unseen. In other
words, we solve our problem of validation.
BigBird blog introduction text
Transformer-based models have shown to be very useful for many NLP tasks.
However, a major limitation of transformers-based models is its O(n^2) time
& memory complexity (where n is sequence length). Hence, it's computationally
very expensive to apply transformer-based models on long sequences n > 512.
Several recent papers, e.g. Longformer, Performer, Reformer, Clustered attention
try to remedy this problem by approximating the full attention matrix. You can
checkout 🤗's recent blog post in case you are unfamiliar with these models.
BigBird (introduced in paper) is one of such recent models to address this issue.
BigBird relies on block sparse attention instead of normal attention (i.e. BERT's
attention) and can handle sequences up to a length of 4096 at a much lower computational
cost compared to BERT. It has achieved SOTA on various tasks involving very long
sequences such as long documents summarization, question-answering with long contexts.
BigBird RoBERTa-like model is now available in 🤗Transformers. The goal of this
post is to give the reader an in-depth understanding of big bird implementation
& ease one's life in using BigBird with 🤗Transformers. But, before going into
more depth, it is important to remember that the BigBird's attention is an approximation
of BERT's full attention and therefore does not strive to be better than BERT's
full attention, but rather to be more efficient. It simply allows to apply transformer-based
models to much longer sequences since BERT's quadratic memory requirement quickly
becomes unbearable. Simply put, if we would have ∞ compute & ∞ time, BERT's attention
would be preferred over block sparse attention (which we are going to discuss
in this post).
If you wonder why we need more compute when working with longer sequences, this
blog post is just right for you!
Some of the main questions one might have when working with standard BERT-like
attention include:
Do all tokens really have to attend to all other tokens? Why not compute attention
only over important tokens? How to decide what tokens are important? How to attend
to just a few tokens in a very efficient way? In this blog post, we will try to
answer those questions.
What tokens should be attended to? We will give a practical example of how attention
works by considering the sentence 'BigBird is now available in HuggingFace for
extractive question answering'. In BERT-like attention, every word would simply
attend to all other tokens.
Let's think about a sensible choice of key tokens that a queried token actually
only should attend to by writing some pseudo-code. Will will assume that the token
available is queried and build a sensible list of key tokens to attend to.
>>> # let's consider following sentence as an example >>> example = ['BigBird',
'is', 'now', 'available', 'in', 'HuggingFace', 'for', 'extractive',
'question', 'answering']
>>> # further let's assume, we're trying to understand the representation of
'available' i.e. >>> query_token = 'available' >>> # We will initialize an
empty `set` and fill up the tokens of our interest as we proceed in this section.
>>> key_tokens = [] # => currently 'available' token doesn't have anything
to attend Nearby tokens should be important because, in a sentence (sequence of
words), the current word is highly dependent on neighboring past & future tokens.
This intuition is the idea behind the concept of sliding attention.
Rick and Morty text
To be fair, you have to have a very high IQ to understand Rick and Morty.
The humour is extremely subtle, and without a solid grasp of theoretical physics
most of the jokes will go over a typical viewer's head. There's also Rick's
nihilistic outlook, which is deftly woven into his characterisation - his personal
philosophy draws heavily from Narodnaya Volya literature, for instance. The fans
understand this stuff; they have the intellectual capacity to truly appreciate
the depths of these jokes, to realise that they're not just funny - they say something
deep about LIFE. As a consequence people who dislike Rick & Morty truly ARE idiots -
of course they wouldn't appreciate, for instance, the humour in Rick's existential
catchphrase 'Wubba Lubba Dub Dub,' which itself is a cryptic reference to Turgenev's
Russian epic Fathers and Sons. I'm smirking right now just imagining one of those
addlepated simpletons scratching their heads in confusion as Dan Harmon's genius
wit unfolds itself on their television screens. What fools.. how I pity them.
😂
And yes, by the way, i DO have a Rick & Morty tattoo. And no, you cannot see it.
It's for the ladies' eyes only - and even then they have to demonstrate that
they're within 5 IQ points of my own (preferably lower) beforehand. Nothin personnel
kid 😎
✨ Key Features
- Broad applicability: can summarize many kinds of text, such as earthquake-related passages, scientific papers, transcribed lectures, and blog introductions.
- Long-sequence handling: processes long input sequences and performs well on long-document summarization.
- Efficient attention: built on LongT5's transient-global (TGlobal) attention, which keeps the computational cost of very long inputs manageable.
📦 Installation
The source documentation does not provide installation steps.
💻 Usage Examples
The source documentation does not provide a code example.
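As a stand-in, here is a minimal sketch of loading the checkpoint with the standard 🤗 Transformers summarization pipeline; the pipeline-based usage is an assumption on our part, not an excerpt from the original card.

```python
# Minimal sketch (assumes the standard 🤗 Transformers summarization pipeline works for this checkpoint).
# Dependencies: pip install transformers sentencepiece torch
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="pszemraj/long-t5-tglobal-base-16384-booksum-V12",
)

long_text = "large earthquakes along a given fault segment do not occur at random intervals ..."  # any long document

result = summarizer(long_text)
print(result[0]["summary_text"])
```

Generation keyword arguments (see the parameters listed below) can be passed directly to the pipeline call to override the defaults.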
📚 Documentation
Generation parameters
Parameter | Value |
---|---|
Max length | 64 |
Min length | 8 |
No-repeat n-gram size | 3 |
Early stopping | Enabled |
Repetition penalty | 3.5 |
Length penalty | 0.3 |
Encoder no-repeat n-gram size | 3 |
Number of beams | 4 |
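For orientation, the sketch below shows how these settings map onto the usual `generate()` keyword arguments in 🤗 Transformers; the argument names are the standard library ones and are our assumption, not taken from the source documentation.

```python
# Sketch: the decoding settings from the table expressed as standard generate() arguments.
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "pszemraj/long-t5-tglobal-base-16384-booksum-V12"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

document = "..."  # a long input document, up to 16384 tokens
inputs = tokenizer(document, return_tensors="pt", truncation=True, max_length=16384)

with torch.no_grad():
    summary_ids = model.generate(
        **inputs,
        max_length=64,                   # Max length
        min_length=8,                    # Min length
        no_repeat_ngram_size=3,          # No-repeat n-gram size
        encoder_no_repeat_ngram_size=3,  # Encoder no-repeat n-gram size
        early_stopping=True,             # Early stopping
        repetition_penalty=3.5,          # Repetition penalty
        length_penalty=0.3,              # Length penalty
        num_beams=4,                     # Number of beams
    )

print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```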
Evaluation Results
samsum dataset
Metric type | Metric | Value |
---|---|---|
rouge | ROUGE-1 | 30.0032 |
rouge | ROUGE-2 | 7.2671 |
rouge | ROUGE-L | 21.8779 |
rouge | ROUGE-LSUM | 26.4371 |
loss | loss | 2.6383285522460938 |
gen_len | gen_len | 54.2357 |
launch/gov_report dataset
Metric type | Metric | Value |
---|---|---|
rouge | ROUGE-1 | 37.0538 |
rouge | ROUGE-2 | 8.1512 |
rouge | ROUGE-L | 17.6645 |
rouge | ROUGE-LSUM | 33.4275 |
loss | loss | 2.6052205562591553 |
gen_len | gen_len | 201.5951 |
kmfoda/booksum dataset
Metric type | Metric | Value |
---|---|---|
rouge | ROUGE-1 | 36.1423 |
rouge | ROUGE-2 | 5.634 |
rouge | ROUGE-L | 16.3747 |
rouge | ROUGE-LSUM | 33.0665 |
loss | loss | 2.454127550125122 |
gen_len | gen_len | 239.4179 |
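The evaluation setup (splits, preprocessing) is not described here; as a rough illustration, ROUGE scores of this kind can be computed with the 🤗 `evaluate` library, which is an assumption about tooling rather than the authors' actual script.

```python
# Illustrative sketch only: computing ROUGE with the evaluate library.
import evaluate

rouge = evaluate.load("rouge")

predictions = ["summary text produced by the model ..."]  # hypothetical strings
references = ["gold reference summary ..."]

scores = rouge.compute(predictions=predictions, references=references)
# evaluate's rouge returns fractions in [0, 1]; scale by 100 to match the table above.
print({name: round(value * 100, 4) for name, value in scores.items()})
```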
🔧 Technical Details
The source documentation does not provide further technical details.
📄 License
This model is released under the following licenses:
- Apache-2.0
- BSD-3-Clause