🚀 YOSO
The YOSO model is designed for masked language modeling (MLM) with a sequence length of 4096, offering an efficient solution for long-sequence natural language processing tasks.
🚀 Quick Start
You can quickly get started with the YOSO model on masked language modeling tasks, as shown in the following example:
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='uw-madison/yoso-4096')
>>> unmasker("Paris is the [MASK] of France.")
[{'score': 0.024274500086903572,
'token': 812,
'token_str': ' capital',
'sequence': 'Paris is the capital of France.'},
{'score': 0.022863076999783516,
'token': 3497,
'token_str': ' Republic',
'sequence': 'Paris is the Republic of France.'},
{'score': 0.01383623294532299,
'token': 1515,
'token_str': ' French',
'sequence': 'Paris is the French of France.'},
{'score': 0.013550693169236183,
'token': 2201,
'token_str': ' Paris',
'sequence': 'Paris is the Paris of France.'},
{'score': 0.011591030284762383,
'token': 270,
'token_str': ' President',
'sequence': 'Paris is the President of France.'}]
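If you prefer to work with the model directly rather than through the pipeline helper, the following minimal sketch loads the same checkpoint with AutoTokenizer and AutoModelForMaskedLM and reads off the top predictions for the masked position. It assumes the standard Transformers masked-LM interface; the choice of five predictions simply mirrors the pipeline output above.

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Load the same checkpoint used in the pipeline example above
tokenizer = AutoTokenizer.from_pretrained("uw-madison/yoso-4096")
model = AutoModelForMaskedLM.from_pretrained("uw-madison/yoso-4096")

# Build the input with the tokenizer's own mask token
text = f"Paris is the {tokenizer.mask_token} of France."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the masked position and take the five most likely tokens for it
mask_positions = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_positions[0]].topk(5).indices
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))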
✨ Features
- Efficient Self-Attention: The YOSO model uses a Bernoulli sampling attention mechanism based on Locality Sensitive Hashing (LSH), reducing the quadratic complexity of self-attention to linear. This allows for more efficient training on long sequences (see the usage sketch after this list).
- Good Performance: Evaluated on the GLUE benchmark with a standard 512 sequence length, it shows favorable performance compared to a standard pretrained Transformer. On the Long Range Arena (LRA) benchmark for long-sequence evaluation, it achieves results comparable to softmax self-attention with significant speed-ups and memory savings, often outperforming other efficient self-attention methods.
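As a rough illustration of the long-sequence use case, the sketch below encodes a document of a few thousand tokens in a single forward pass. It assumes the generic AutoModel / AutoTokenizer interface and simply truncates at the 4096-token limit mentioned above; the repeated sentence is only a placeholder for a real long document.

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("uw-madison/yoso-4096")
model = AutoModel.from_pretrained("uw-madison/yoso-4096")

# Placeholder long document; in practice this would be your own text
long_text = " ".join(["Paris is the capital of France."] * 400)

# Tokenize up to the 4096-token limit and run one forward pass
inputs = tokenizer(long_text, return_tensors="pt", truncation=True, max_length=4096)
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)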
📚 Documentation
The YOSO model was proposed in You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh.
The abstract from the paper is as follows:
Transformer-based models are widely used in natural language processing (NLP). Central to the transformer model is the self-attention mechanism, which captures the interactions of token pairs in the input sequences and depends quadratically on the sequence length. Training such models on longer sequences is expensive. In this paper, we show that a Bernoulli sampling attention mechanism based on Locality Sensitive Hashing (LSH) decreases the quadratic complexity of such models to linear. We bypass the quadratic cost by considering self-attention as a sum of individual tokens associated with Bernoulli random variables that can, in principle, be sampled at once by a single hash (although in practice, this number may be a small constant). This leads to an efficient sampling scheme to estimate self-attention which relies on specific modifications of LSH (to enable deployment on GPU architectures). We evaluate our algorithm on the GLUE benchmark with standard 512 sequence length where we see favorable performance relative to a standard pretrained Transformer. On the Long Range Arena (LRA) benchmark, for evaluating performance on long sequences, our method achieves results consistent with softmax self-attention but with sizable speed-ups and memory savings and often outperforms other efficient self-attention methods. Our code is available at this https URL
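To make the central idea of the abstract concrete, here is a toy, deliberately quadratic-time sketch in which attention weights are estimated as empirical LSH collision frequencies, where each collision check acts as one Bernoulli sample. This is only an illustration of the sampling idea, not the paper's algorithm: YOSO uses its own modifications of LSH and avoids the all-pairs comparison in order to reach linear cost.

import torch
import torch.nn.functional as F

def lsh_collision_attention(q, k, v, num_hashes=64, hash_bits=8):
    # Toy illustration: attention weights are taken to be the empirical probability
    # that a query and a key collide under a random-hyperplane (sign) LSH. Each
    # collision indicator is one Bernoulli sample; averaging over hashes estimates
    # the collision probability.
    q = F.normalize(q, dim=-1)
    k = F.normalize(k, dim=-1)
    d = q.shape[-1]
    collisions = torch.zeros(q.shape[0], k.shape[0])
    for _ in range(num_hashes):
        planes = torch.randn(d, hash_bits)          # one hash = hash_bits random hyperplanes
        q_codes = q @ planes > 0                    # sign bits for queries
        k_codes = k @ planes > 0                    # sign bits for keys
        match = (q_codes.unsqueeze(1) == k_codes.unsqueeze(0)).all(-1)
        collisions += match.float()                 # record collision outcomes
    weights = collisions / num_hashes               # estimated collision probabilities
    weights = weights / weights.sum(-1, keepdim=True).clamp_min(1e-6)
    return weights @ v

# Tiny usage example with random queries, keys, and values
q, k, v = torch.randn(16, 32), torch.randn(16, 32), torch.randn(16, 32)
print(lsh_collision_attention(q, k, v).shape)  # torch.Size([16, 32])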