🚀 ALBERT XXLarge v2
A model pretrained on English data with a masked language modeling (MLM) objective. It learns bidirectional sentence representations and can be fine-tuned for a wide range of downstream tasks.
✨ Features
- Bidirectional Learning: Through masked language modeling (MLM), it learns a bidirectional representation of the sentence, unlike traditional RNNs and autoregressive models, which only condition on one-sided context.
- Sentence Ordering Prediction: Uses a pretraining loss based on predicting the ordering of two consecutive text segments.
- Shared Layers: Shares parameters across its Transformer layers, resulting in a small memory footprint (see the configuration sketch after this list).
- Improved Version 2: With different dropout rates, additional training data, and longer training, it performs better on downstream tasks.
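As a quick illustration of the layer sharing, here is a minimal sketch (assuming the transformers library is installed and the model config can be downloaded) that inspects the published configuration:
from transformers import AlbertConfig

# Download and inspect the configuration of albert-xxlarge-v2.
config = AlbertConfig.from_pretrained('albert-xxlarge-v2')
# 12 Transformer layers are applied at run time ...
print(config.num_hidden_layers)
# ... but they all reuse the weights of a single group, which is what keeps
# the memory footprint small.
print(config.num_hidden_groups)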
📦 Installation
To use the model in Python, install the transformers library:
pip install transformers
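A quick sanity check that the library is importable (the version printed depends on what pip installed):
import transformers
print(transformers.__version__)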
💻 Usage Examples
Basic Usage
You can use this model directly with a pipeline for masked language modeling:
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-xxlarge-v2')
>>> unmasker("Hello I'm a [MASK] model.")
[
  {
    "sequence":"[CLS] hello i'm a modeling model.[SEP]",
    "score":0.05816134437918663,
    "token":12807,
    "token_str":"▁modeling"
  },
  {
    "sequence":"[CLS] hello i'm a modelling model.[SEP]",
    "score":0.03748830780386925,
    "token":23089,
    "token_str":"▁modelling"
  },
  {
    "sequence":"[CLS] hello i'm a model model.[SEP]",
    "score":0.033725276589393616,
    "token":1061,
    "token_str":"▁model"
  },
  {
    "sequence":"[CLS] hello i'm a runway model.[SEP]",
    "score":0.017313428223133087,
    "token":8014,
    "token_str":"▁runway"
  },
  {
    "sequence":"[CLS] hello i'm a lingerie model.[SEP]",
    "score":0.014405295252799988,
    "token":29104,
    "token_str":"▁lingerie"
  }
]
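The pipeline returns the top five candidates by default; depending on your transformers version, a top_k argument (topk in older releases) lets you ask for more:
>>> unmasker("Hello I'm a [MASK] model.", top_k=10)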
Advanced Usage
Get features of a given text in PyTorch
from transformers import AlbertTokenizer, AlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-xxlarge-v2')
model = AlbertModel.from_pretrained("albert-xxlarge-v2")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
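The call returns a model output object; a short follow-up (assuming the code above has run) to look at the extracted features:
# Hidden state for every input token: shape (batch_size, sequence_length, 4096)
print(output.last_hidden_state.shape)
# Pooled representation of the whole input: shape (batch_size, 4096)
print(output.pooler_output.shape)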
Get features of a given text in TensorFlow
from transformers import AlbertTokenizer, TFAlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-xxlarge-v2')
model = TFAlbertModel.from_pretrained("albert-xxlarge-v2")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
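As in the PyTorch case, output.last_hidden_state holds the per-token features; one common (but not the only) way to collapse them into a single sentence vector is mean pooling, sketched below:
import tensorflow as tf

# Average the token embeddings into one 4096-dimensional sentence vector.
sentence_embedding = tf.reduce_mean(output.last_hidden_state, axis=1)
print(sentence_embedding.shape)  # (1, 4096)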
📚 Documentation
Model description
ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. It was pretrained with two objectives:
- Masked language modeling (MLM): Randomly masks 15% of the words in the input sentence and predicts the masked words, allowing the model to learn a bidirectional representation.
- Sentence Ordering Prediction (SOP): Predicts the ordering of two consecutive segments of text.
ALBERT shares its layers across its Transformer, resulting in a small memory footprint. This is the second version of the xxlarge model, which performs better on nearly all downstream tasks due to different dropout rates, additional training data, and longer training.
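To make the SOP objective concrete, here is a toy sketch (illustrative only, not the actual pretraining code) of how positive and negative training pairs can be built from two consecutive segments:
# Two consecutive segments taken from the same document.
segment_a = "The cat sat on the mat."
segment_b = "Then it fell asleep in the sun."

# Positive example: the segments in their original order (label 1).
positive = (segment_a, segment_b, 1)
# Negative example: the same segments swapped (label 0). Telling these apart
# forces the model to learn discourse coherence rather than topic similarity.
negative = (segment_b, segment_a, 0)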
This model has the following configuration:
| Property | Details |
|----------|---------|
| Repeating Layers | 12 |
| Embedding Dimension | 128 |
| Hidden Dimension | 4096 |
| Attention Heads | 64 |
| Parameters | 223M |
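These numbers can be checked against the published checkpoint; a hedged sketch follows (the exact parameter count printed may differ slightly from the rounded 223M in the table):
from transformers import AlbertConfig, AlbertModel

config = AlbertConfig.from_pretrained('albert-xxlarge-v2')
print(config.num_hidden_layers)    # repeating layers: 12
print(config.embedding_size)       # embedding dimension: 128
print(config.hidden_size)          # hidden dimension: 4096
print(config.num_attention_heads)  # attention heads: 64

# Loading the weights lets us count parameters; expect roughly 223M.
model = AlbertModel.from_pretrained('albert-xxlarge-v2')
print(f"{model.num_parameters() / 1e6:.0f}M parameters")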
Intended uses & limitations
You can use the raw model for masked language modeling or sentence order prediction, but it is mainly intended to be fine-tuned on a downstream task. Check the model hub for fine-tuned versions.
Note that this model is primarily aimed at tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification, or question answering. For text generation, consider models like GPT2.
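For example, a minimal fine-tuning sketch for sequence classification (the two sentences and labels below are placeholders; a real setup would use a proper dataset and a training loop or the Trainer API):
import torch
from transformers import AlbertTokenizer, AlbertForSequenceClassification

tokenizer = AlbertTokenizer.from_pretrained('albert-xxlarge-v2')
# A randomly initialised classification head is added on top of the encoder.
model = AlbertForSequenceClassification.from_pretrained('albert-xxlarge-v2', num_labels=2)

batch = tokenizer(["I loved this movie.", "A complete waste of time."],
                  padding=True, return_tensors='pt')
labels = torch.tensor([1, 0])  # placeholder sentiment labels

outputs = model(**batch, labels=labels)
print(outputs.loss)    # cross-entropy loss to backpropagate during fine-tuning
print(outputs.logits)  # shape (2, 2): one score per class and example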
Limitations and bias
Even with fairly neutral training data, this model can have biased predictions. For example:
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-xxlarge-v2')
>>> unmasker("The man worked as a [MASK].")
[
  {
    "sequence":"[CLS] the man worked as a chauffeur.[SEP]",
    "score":0.029577180743217468,
    "token":28744,
    "token_str":"▁chauffeur"
  },
  {
    "sequence":"[CLS] the man worked as a janitor.[SEP]",
    "score":0.028865724802017212,
    "token":29477,
    "token_str":"▁janitor"
  },
  {
    "sequence":"[CLS] the man worked as a shoemaker.[SEP]",
    "score":0.02581118606030941,
    "token":29024,
    "token_str":"▁shoemaker"
  },
  {
    "sequence":"[CLS] the man worked as a blacksmith.[SEP]",
    "score":0.01849772222340107,
    "token":21238,
    "token_str":"▁blacksmith"
  },
  {
    "sequence":"[CLS] the man worked as a lawyer.[SEP]",
    "score":0.01820771023631096,
    "token":3672,
    "token_str":"▁lawyer"
  }
]
>>> unmasker("The woman worked as a [MASK].")
[
  {
    "sequence":"[CLS] the woman worked as a receptionist.[SEP]",
    "score":0.04604868218302727,
    "token":25331,
    "token_str":"▁receptionist"
  },
  {
    "sequence":"[CLS] the woman worked as a janitor.[SEP]",
    "score":0.028220869600772858,
    "token":29477,
    "token_str":"▁janitor"
  },
  {
    "sequence":"[CLS] the woman worked as a paramedic.[SEP]",
    "score":0.0261906236410141,
    "token":23386,
    "token_str":"▁paramedic"
  },
  {
    "sequence":"[CLS] the woman worked as a chauffeur.[SEP]",
    "score":0.024797942489385605,
    "token":28744,
    "token_str":"▁chauffeur"
  },
  {
    "sequence":"[CLS] the woman worked as a waitress.[SEP]",
    "score":0.024124596267938614,
    "token":13678,
    "token_str":"▁waitress"
  }
]
This bias will also affect all fine-tuned versions of this model.
Training data
The ALBERT model was pretrained on BookCorpus, a dataset of 11,038 unpublished books, and English Wikipedia (excluding lists, tables, and headers).
Training procedure
Preprocessing
The texts are lowercased and tokenized using SentencePiece with a vocabulary size of 30,000. The model inputs are of the form:
[CLS] Sentence A [SEP] Sentence B [SEP]
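A short sketch (assuming the tokenizer loads as in the usage examples above) showing that encoding a sentence pair produces exactly this layout:
from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained('albert-xxlarge-v2')
ids = tokenizer("Sentence A", "Sentence B")["input_ids"]
# Expected tokens, roughly: ['[CLS]', '▁sentence', '▁a', '[SEP]', '▁sentence', '▁b', '[SEP]']
print(tokenizer.convert_ids_to_tokens(ids))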
Training
The ALBERT procedure follows the BERT setup. The masking details for each sentence are listed below, followed by a short illustrative sketch:
- 15% of the tokens are masked.
- 80% of the masked tokens are replaced by [MASK].
- 10% of the masked tokens are replaced by a random token.
- 10% of the masked tokens are left unchanged.
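A toy sketch of this 80/10/10 rule (illustrative only; the real pretraining code works on SentencePiece ids, skips special tokens, and masks n-grams):
import random

def mask_tokens(tokens, vocab, mask_prob=0.15):
    """Apply the BERT-style 80/10/10 masking rule to a list of tokens."""
    masked = list(tokens)
    for i in range(len(masked)):
        if random.random() < mask_prob:   # select 15% of the tokens
            r = random.random()
            if r < 0.8:                   # 80% of those -> [MASK]
                masked[i] = "[MASK]"
            elif r < 0.9:                 # 10% -> a random vocabulary token
                masked[i] = random.choice(vocab)
            # remaining 10% -> left unchanged
    return masked

print(mask_tokens("the cat sat on the mat".split(), vocab=["dog", "blue", "run"]))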
Evaluation results
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
| | Average | SQuAD1.1 | SQuAD2.0 | MNLI | SST-2 | RACE |
|---|---|---|---|---|---|---|
| V2 | | | | | | |
| ALBERT-base | 82.3 | 90.2/83.2 | 82.1/79.3 | 84.6 | 92.9 | 66.8 |
| ALBERT-large | 85.7 | 91.8/85.2 | 84.9/81.8 | 86.5 | 94.9 | 75.2 |
| ALBERT-xlarge | 87.9 | 92.9/86.4 | 87.9/84.1 | 87.9 | 95.4 | 80.7 |
| ALBERT-xxlarge | 90.9 | 94.6/89.1 | 89.8/86.9 | 90.6 | 96.8 | 86.8 |
| V1 | | | | | | |
| ALBERT-base | 80.1 | 89.3/82.3 | 80.0/77.1 | 81.6 | 90.3 | 64.0 |
| ALBERT-large | 82.4 | 90.6/83.9 | 82.3/79.4 | 83.5 | 91.7 | 68.5 |
| ALBERT-xlarge | 85.5 | 92.5/86.1 | 86.1/83.1 | 86.4 | 92.4 | 74.8 |
| ALBERT-xxlarge | 91.0 | 94.8/89.3 | 90.2/87.4 | 90.8 | 96.9 | 86.5 |
BibTeX entry and citation info
@article{DBLP:journals/corr/abs-1909-11942,
author = {Zhenzhong Lan and
Mingda Chen and
Sebastian Goodman and
Kevin Gimpel and
Piyush Sharma and
Radu Soricut},
title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language
Representations},
journal = {CoRR},
volume = {abs/1909.11942},
year = {2019},
url = {http://arxiv.org/abs/1909.11942},
archivePrefix = {arXiv},
eprint = {1909.11942},
timestamp = {Fri, 27 Sep 2019 13:04:21 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
📄 License
This model is released under the Apache 2.0 license.