Long T5 Tglobal Base 16384 Booksum V12
A long-document summarization model built on the T5 architecture, supporting inputs of up to 16,384 tokens, with strong performance on book summarization tasks.
Downloads: 109
Released: 9/9/2022
Model Overview
The model is optimized specifically for long-document summarization: it uses the T5 architecture extended to handle long inputs, and is suited to summarizing books, scientific papers, and other long-form content.
Model Features
Very long context
Handles inputs of up to 16,384 tokens, suitable for very long content such as book chapters
Domain-focused training
Trained specifically on the BookSum dataset, with strong results on summarizing academic literature and book content
Multi-scale summaries
Can generate summaries of varying lengths (8-64 tokens) to meet different needs
Model Capabilities
Long-text summary generation
Content condensation
Book chapter summarization
Scientific paper summarization
Technical document summarization
Use Cases
Academic research
Rapid paper reading
Generates concise summaries of long academic papers, helping researchers quickly grasp the core content
ROUGE-1 score of 30.00 on scientific paper summarization
Publishing
Book content summarization
Automatically generates book chapter summaries for tables of contents, reading guides, and other publishing scenarios
ROUGE-1 score of 36.14 on the BookSum dataset
Government reports
Policy document summarization
Extracts key information from long government reports
ROUGE-1 score of 37.05 on the gov_report dataset
🚀 pszemraj/long-t5-tglobal-base-16384-booksum-V12
This model is intended primarily for text summarization: it can summarize long documents, performs well across several datasets, and ships with a specific set of generation parameters chosen to optimize output quality.
🚀 Quick Start
The model is suited to text summarization and can handle many kinds of input, such as long documents and scientific papers. The following are example inputs:
Earthquake-related text
large earthquakes along a given fault segment do not occur at random intervals
because it takes time to accumulate the strain energy for the rupture. The rates
at which tectonic plates move and accumulate strain at their boundaries are approximately
uniform. Therefore, in first approximation, one may expect that large ruptures
of the same fault segment will occur at approximately constant time intervals.
If subsequent main shocks have different amounts of slip across the fault, then
the recurrence time may vary, and the basic idea of periodic mainshocks must be
modified. For great plate boundary ruptures the length and slip often vary by
a factor of 2. Along the southern segment of the San Andreas fault the recurrence
interval is 145 years with variations of several decades. The smaller the standard
deviation of the average recurrence interval, the more specific could be the long
term prediction of a future mainshock.
Scientific paper text
A typical feed - forward neural field algorithm. Spatiotemporal coordinates
are fed into a neural network that predicts values in the reconstructed domain.
Then, this domain is mapped to the sensor domain where sensor measurements are
available as supervision. Class and Section Problems Addressed Generalization
(Section 2) Inverse problems, ill - posed problems, editability; symmetries. Hybrid
Representations (Section 3) Computation & memory efficiency, representation capacity,
editability: Forward Maps (Section 4) Inverse problems Network Architecture (Section
5) Spectral bias, integration & derivatives. Manipulating Neural Fields (Section
6) Edit ability, constraints, regularization. Table 2: The five classes of techniques
in the neural field toolbox each addresses problems that arise in learning, inference,
and control. (Section 3). We can supervise reconstruction via differentiable forward
maps that transform Or project our domain (e.g, 3D reconstruction via 2D images;
Section 4) With appropriate network architecture choices, we can overcome neural
network spectral biases (blurriness) and efficiently compute derivatives and integrals
(Section 5). Finally, we can manipulate neural fields to add constraints and regularizations,
and to achieve editable representations (Section 6). Collectively, these classes
constitute a 'toolbox' of techniques to help solve problems with neural fields
There are three components in a conditional neural field: (1) An encoder or inference
function € that outputs the conditioning latent variable 2 given an observation
0 E(0) =2. 2 is typically a low - dimensional vector, and is often referred to aS
a latent code Or feature code_ (2) A mapping function 4 between Z and neural field
parameters O: Y(z) = O; (3) The neural field itself $. The encoder € finds the
most probable z given the observations O: argmaxz P(2/0). The decoder maximizes
the inverse conditional probability to find the most probable 0 given Z: arg -
max P(Olz). We discuss different encoding schemes with different optimality guarantees
(Section 2.1.1), both global and local conditioning (Section 2.1.2), and different
mapping functions Y (Section 2.1.3) 2. Generalization Suppose we wish to estimate
a plausible 3D surface shape given a partial or noisy point cloud. We need a suitable
prior over the sur - face in its reconstruction domain to generalize to the partial
observations. A neural network expresses a prior via the function space of its
architecture and parameters 0, and generalization is influenced by the inductive
bias of this function space (Section 5).
Transcribed audio (lecture) text
Is a else or outside the cob and tree written being of early client rope
and you have is for good reasons. On to the ocean in Orange for time. By's the
aggregate we can bed it yet. Why this please pick up on a sort is do and also
M Getoi's nerocos and do rain become you to let so is his brother is made in
use and Mjulia's's the lay major is aging Masastup coin present sea only of
Oosii rooms set to you We do er do we easy this private oliiishs lonthen might
be okay. Good afternoon everybody. Welcome to this lecture of Computational Statistics.
As you can see, I'm not socially my name is Michael Zelinger. I'm one of the
task for this class and you might have already seen me in the first lecture where
I made a quick appearance. I'm also going to give the tortillas in the last third
of this course. So to give you a little bit about me, I'm a old student here
with better Bulman and my research centres on casual inference applied to biomedical
disasters, so that could be genomics or that could be hospital data. If any of
you is interested in writing a bachelor thesis, a semester paper may be mastathesis
about this topic feel for reach out to me. you have my name on models and my email
address you can find in the directory I'd Be very happy to talk about it. you
do not need to be sure about it, we can just have a chat. So with that said, let's
get on with the lecture. There's an exciting topic today I'm going to start
by sharing some slides with you and later on during the lecture we'll move to
the paper. So bear with me for a few seconds. Well, the projector is starting
up. Okay, so let's get started. Today's topic is a very important one. It's
about a technique which really forms one of the fundamentals of data science,
machine learning, and any sort of modern statistics. It's called cross validation.
I know you really want to understand this topic I Want you to understand this
and frankly, nobody's gonna leave Professor Mineshousen's class without understanding
cross validation. So to set the stage for this, I Want to introduce you to the
validation problem in computational statistics. So the problem is the following:
You trained a model on available data. You fitted your model, but you know the
training data you got could always have been different and some data from the
environment. Maybe it's a random process. You do not really know what it is,
but you know that somebody else who gets a different batch of data from the same
environment they would get slightly different training data and you do not care
that your method performs as well. On this training data. you want to to perform
well on other data that you have not seen other data from the same environment.
So in other words, the validation problem is you want to quantify the performance
of your model on data that you have not seen. So how is this even possible? How
could you possibly measure the performance on data that you do not know The solution
to? This is the following realization is that given that you have a bunch of data,
you were in charge. You get to control how much that your model sees. It works
in the following way: You can hide data firms model. Let's say you have a training
data set which is a bunch of doubtless so X eyes are the features those are typically
hide and national vector. It's got more than one dimension for sure. And the
why why eyes. Those are the labels for supervised learning. As you've seen before,
it's the same set up as we have in regression. And so you have this training
data and now you choose that you only use some of those data to fit your model.
You're not going to use everything, you only use some of it the other part you
hide from your model. And then you can use this hidden data to do validation from
the point of you of your model. This hidden data is complete by unseen. In other
words, we solve our problem of validation.
BigBird blog introduction text
Transformer-based models have shown to be very useful for many NLP tasks.
However, a major limitation of transformers-based models is its O(n^2) time
& memory complexity (where n is sequence length). Hence, it's computationally
very expensive to apply transformer-based models on long sequences n > 512.
Several recent papers, e.g. Longformer, Performer, Reformer, Clustered attention
try to remedy this problem by approximating the full attention matrix. You can
checkout 🤗's recent blog post in case you are unfamiliar with these models.
BigBird (introduced in paper) is one of such recent models to address this issue.
BigBird relies on block sparse attention instead of normal attention (i.e. BERT's
attention) and can handle sequences up to a length of 4096 at a much lower computational
cost compared to BERT. It has achieved SOTA on various tasks involving very long
sequences such as long documents summarization, question-answering with long contexts.
BigBird RoBERTa-like model is now available in 🤗Transformers. The goal of this
post is to give the reader an in-depth understanding of big bird implementation
& ease one's life in using BigBird with 🤗Transformers. But, before going into
more depth, it is important to remember that the BigBird's attention is an approximation
of BERT's full attention and therefore does not strive to be better than BERT's
full attention, but rather to be more efficient. It simply allows to apply transformer-based
models to much longer sequences since BERT's quadratic memory requirement quickly
becomes unbearable. Simply put, if we would have ∞ compute & ∞ time, BERT's attention
would be preferred over block sparse attention (which we are going to discuss
in this post).
If you wonder why we need more compute when working with longer sequences, this
blog post is just right for you!
Some of the main questions one might have when working with standard BERT-like
attention include:
Do all tokens really have to attend to all other tokens? Why not compute attention
only over important tokens? How to decide what tokens are important? How to attend
to just a few tokens in a very efficient way? In this blog post, we will try to
answer those questions.
What tokens should be attended to? We will give a practical example of how attention
works by considering the sentence 'BigBird is now available in HuggingFace for
extractive question answering'. In BERT-like attention, every word would simply
attend to all other tokens.
Let's think about a sensible choice of key tokens that a queried token actually
only should attend to by writing some pseudo-code. Will will assume that the token
available is queried and build a sensible list of key tokens to attend to.
>>> # let's consider following sentence as an example >>> example = ['BigBird',
'is', 'now', 'available', 'in', 'HuggingFace', 'for', 'extractive',
'question', 'answering']
>>> # further let's assume, we're trying to understand the representation of
'available' i.e. >>> query_token = 'available' >>> # We will initialize an
empty `set` and fill up the tokens of our interest as we proceed in this section.
>>> key_tokens = [] # => currently 'available' token doesn't have anything
to attend Nearby tokens should be important because, in a sentence (sequence of
words), the current word is highly dependent on neighboring past & future tokens.
This intuition is the idea behind the concept of sliding attention.
Rick and Morty text
To be fair, you have to have a very high IQ to understand Rick and Morty.
The humour is extremely subtle, and without a solid grasp of theoretical physics
most of the jokes will go over a typical viewer's head. There's also Rick's
nihilistic outlook, which is deftly woven into his characterisation - his personal
philosophy draws heavily from Narodnaya Volya literature, for instance. The fans
understand this stuff; they have the intellectual capacity to truly appreciate
the depths of these jokes, to realise that they're not just funny - they say something
deep about LIFE. As a consequence people who dislike Rick & Morty truly ARE idiots -
of course they wouldn't appreciate, for instance, the humour in Rick's existential
catchphrase 'Wubba Lubba Dub Dub,' which itself is a cryptic reference to Turgenev's
Russian epic Fathers and Sons. I'm smirking right now just imagining one of those
addlepated simpletons scratching their heads in confusion as Dan Harmon's genius
wit unfolds itself on their television screens. What fools.. how I pity them.
😂
And yes, by the way, i DO have a Rick & Morty tattoo. And no, you cannot see it.
It's for the ladies' eyes only - and even then they have to demonstrate that
they're within 5 IQ points of my own (preferably lower) beforehand. Nothin personnel
kid 😎
✨ Key Features
- Broad applicability: can summarize many types of text, as in the examples above (earthquake science, scientific papers, transcribed lectures, blog introductions).
- Long-sequence handling: processes long input sequences and performs well on long-document summarization.
- Controlled computational cost: uses an efficient long-range attention mechanism (LongT5's transient-global attention) so that very long inputs remain tractable.
📦 Installation
The source documentation does not provide installation steps.
💻 Usage Examples
The source documentation does not include code; a hedged usage sketch is provided below.
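A minimal sketch, assuming the Hugging Face `transformers` library and the model id shown in this card; the input text is abbreviated from the earthquake example above, and the call pattern is illustrative rather than the author's official usage code.

```python
# Illustrative sketch only: load the checkpoint with the transformers
# summarization pipeline and summarize a long document.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="pszemraj/long-t5-tglobal-base-16384-booksum-V12",  # model id as given in this card
)

long_text = (
    "large earthquakes along a given fault segment do not occur at random intervals "
    "because it takes time to accumulate the strain energy for the rupture. ..."
)

# max_length/min_length follow the 8-64 token summary range documented below.
print(summarizer(long_text, max_length=64, min_length=8)[0]["summary_text"])
```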
📚 Documentation
Generation parameters

| Parameter | Value |
|---|---|
| Maximum summary length (`max_length`) | 64 |
| Minimum summary length (`min_length`) | 8 |
| No-repeat n-gram size (`no_repeat_ngram_size`) | 3 |
| Early stopping (`early_stopping`) | True |
| Repetition penalty (`repetition_penalty`) | 3.5 |
| Length penalty (`length_penalty`) | 0.3 |
| Encoder no-repeat n-gram size (`encoder_no_repeat_ngram_size`) | 3 |
| Beam search width (`num_beams`) | 4 |
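For illustration, a sketch of how the values above map onto standard `transformers` `generate()` keyword arguments; this mapping is my reading of the table, not code taken from the original card.

```python
# Illustrative sketch: the documented generation parameters passed explicitly
# to generate(). Assumes the transformers library; the input text is a placeholder.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "pszemraj/long-t5-tglobal-base-16384-booksum-V12"  # id from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "..."  # a long document, up to 16384 tokens after tokenization
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=16384)

summary_ids = model.generate(
    **inputs,
    max_length=64,                    # maximum summary length
    min_length=8,                     # minimum summary length
    no_repeat_ngram_size=3,
    early_stopping=True,
    repetition_penalty=3.5,
    length_penalty=0.3,
    encoder_no_repeat_ngram_size=3,
    num_beams=4,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```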
Evaluation Results
samsum dataset

| Metric type | Metric | Value |
|---|---|---|
| rouge | ROUGE-1 | 30.0032 |
| rouge | ROUGE-2 | 7.2671 |
| rouge | ROUGE-L | 21.8779 |
| rouge | ROUGE-LSUM | 26.4371 |
| loss | loss | 2.6383285522460938 |
| gen_len | gen_len | 54.2357 |

launch/gov_report dataset

| Metric type | Metric | Value |
|---|---|---|
| rouge | ROUGE-1 | 37.0538 |
| rouge | ROUGE-2 | 8.1512 |
| rouge | ROUGE-L | 17.6645 |
| rouge | ROUGE-LSUM | 33.4275 |
| loss | loss | 2.6052205562591553 |
| gen_len | gen_len | 201.5951 |

kmfoda/booksum dataset

| Metric type | Metric | Value |
|---|---|---|
| rouge | ROUGE-1 | 36.1423 |
| rouge | ROUGE-2 | 5.634 |
| rouge | ROUGE-L | 16.3747 |
| rouge | ROUGE-LSUM | 33.0665 |
| loss | loss | 2.454127550125122 |
| gen_len | gen_len | 239.4179 |
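The scores above are those reported by the original card. As a rough, hedged sketch of how comparable ROUGE numbers could be computed, assuming the `evaluate`, `rouge_score`, and `datasets` packages; the kmfoda/booksum column names used below ("chapter", "summary_text") are assumptions about the dataset schema and may need adjusting.

```python
# Hedged sketch: scoring generated summaries against references with ROUGE.
# Assumes `pip install transformers evaluate rouge_score datasets`.
import evaluate
from datasets import load_dataset
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="pszemraj/long-t5-tglobal-base-16384-booksum-V12",  # id from this card
)
rouge = evaluate.load("rouge")

# Small subset for illustration; column names are assumed, adjust if needed.
sample = load_dataset("kmfoda/booksum", split="test").select(range(8))
predictions = [
    summarizer(row["chapter"], max_length=64, min_length=8, truncation=True)[0]["summary_text"]
    for row in sample
]
references = [row["summary_text"] for row in sample]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # rouge1 / rouge2 / rougeL / rougeLsum
```

Note that the card reports ROUGE on a 0-100 scale, while `evaluate` returns fractions in [0, 1].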
🔧 Technical Details
No further technical details are provided in the source documentation.
📄 License
This model is released under the following licenses:
- Apache-2.0
- BSD-3-Clause
Related Models

| Model | Author | License | Downloads | Likes | Description |
|---|---|---|---|---|---|
| Bart Large Cnn | facebook | MIT | 3.8M | 1,364 | A BART model pretrained on English text and fine-tuned on the CNN/Daily Mail dataset for text summarization. (Text generation, English) |
| Parrot Paraphraser On T5 | prithivida | - | 910.07k | 152 | Parrot is a T5-based paraphrasing framework designed to accelerate training of NLU models by generating high-quality paraphrases for data augmentation. (Text generation, Transformers) |
| Distilbart Cnn 12 6 | sshleifer | Apache-2.0 | 783.96k | 278 | DistilBART is a distilled version of BART optimized for summarization, retaining most of its quality while significantly speeding up inference. (Text generation, English) |
| T5 Base Summarization Claim Extractor | Babelscape | - | 666.36k | 9 | A T5-based model that extracts atomic claims from summaries, a key component of summary factuality evaluation pipelines. (Text generation, Transformers, English) |
| Unieval Sum | MingZhong | - | 318.08k | 3 | UniEval is a unified multi-dimensional evaluator for automatic evaluation of natural language generation, supporting several interpretable evaluation dimensions. (Text generation, Transformers) |
| Pegasus Paraphrase | tuner007 | Apache-2.0 | 209.03k | 185 | A paraphrasing model fine-tuned on the PEGASUS architecture that generates sentences with the same meaning but different wording. (Text generation, Transformers, English) |
| T5 Base Korean Summarization | eenzeenee | - | 148.32k | 25 | A T5-based Korean summarization model, fine-tuned from paust/pko-t5-base on several Korean datasets. (Text generation, Transformers, Korean) |
| Pegasus Xsum | google | - | 144.72k | 198 | PEGASUS is a Transformer-based pretrained model designed for abstractive text summarization. (Text generation, English) |
| Bart Large Cnn Samsum | philschmid | MIT | 141.28k | 258 | A BART-large dialogue summarization model fine-tuned on the SAMSum corpus. (Text generation, Transformers, English) |
| Kobart Summarization | gogamza | MIT | 119.18k | 12 | A KoBART-based Korean summarization model that generates concise summaries of Korean news articles. (Text generation, Transformers, Korean) |