🚀 ALBERT XLarge v2
A model pretrained on English text with a masked language modeling (MLM) objective. It can extract useful features for downstream tasks.
🚀 Quick Start
This model is pretrained on English text. It can be used directly for masked language modeling or sentence ordering prediction, but it is mainly intended for fine-tuning on downstream tasks. You can find fine-tuned versions on the model hub.
✨ Features
- Bidirectional Representation Learning: through Masked Language Modeling (MLM), the model learns a bidirectional representation of sentences, unlike traditional RNNs or autoregressive models.
- Sentence Ordering Prediction: ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text (both pretraining heads are illustrated in the sketch after this list).
- Layer Sharing: ALBERT shares its layers across its Transformer, so every repetition reuses the same weights, resulting in a small memory footprint.
- Improved Version 2: version 2 performs better on nearly all downstream tasks due to different dropout rates, additional training data, and longer training.
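As an illustration of the two pretraining objectives, the minimal sketch below loads both heads through `AlbertForPreTraining`; the example sentences are arbitrary, and the output attribute names (`prediction_logits`, `sop_logits`) follow the Hugging Face `transformers` API:

```python
import torch
from transformers import AlbertTokenizer, AlbertForPreTraining

tokenizer = AlbertTokenizer.from_pretrained('albert-xlarge-v2')
model = AlbertForPreTraining.from_pretrained('albert-xlarge-v2')

# Encode a pair of consecutive segments, as used by the SOP objective.
inputs = tokenizer("The storm knocked the power out.",
                   "Crews restored it the next morning.",
                   return_tensors='pt')

with torch.no_grad():
    outputs = model(**inputs)

# MLM head: one score per vocabulary entry at every position.
print(outputs.prediction_logits.shape)  # torch.Size([1, seq_len, 30000])
# SOP head: two scores -- segments in order vs. swapped.
print(outputs.sop_logits.shape)         # torch.Size([1, 2])
```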
📦 Installation
To use this model, you need the `transformers` library, which you can install with pip:

```bash
pip install transformers
```
💻 Usage Examples
Basic Usage
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-xlarge-v2')
>>> unmasker("Hello I'm a [MASK] model.")
[
    {
        "sequence": "[CLS] hello i'm a modeling model.[SEP]",
        "score": 0.05816134437918663,
        "token": 12807,
        "token_str": "▁modeling"
    },
    {
        "sequence": "[CLS] hello i'm a modelling model.[SEP]",
        "score": 0.03748830780386925,
        "token": 23089,
        "token_str": "▁modelling"
    },
    {
        "sequence": "[CLS] hello i'm a model model.[SEP]",
        "score": 0.033725276589393616,
        "token": 1061,
        "token_str": "▁model"
    },
    {
        "sequence": "[CLS] hello i'm a runway model.[SEP]",
        "score": 0.017313428223133087,
        "token": 8014,
        "token_str": "▁runway"
    },
    {
        "sequence": "[CLS] hello i'm a lingerie model.[SEP]",
        "score": 0.014405295252799988,
        "token": 29104,
        "token_str": "▁lingerie"
    }
]
```
Advanced Usage
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AlbertTokenizer, AlbertModel

tokenizer = AlbertTokenizer.from_pretrained('albert-xlarge-v2')
model = AlbertModel.from_pretrained('albert-xlarge-v2')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
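The returned `output` is a standard `transformers` model output; as a brief follow-up sketch (attribute names per the current API), the token-level and pooled features can be read as follows:

```python
# One 2048-dimensional vector per input token.
token_features = output.last_hidden_state  # shape: (1, seq_len, 2048)
# Pooled [CLS] representation, often used for sentence-level tasks.
sentence_features = output.pooler_output   # shape: (1, 2048)
```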
And in TensorFlow:
```python
from transformers import AlbertTokenizer, TFAlbertModel

tokenizer = AlbertTokenizer.from_pretrained('albert-xlarge-v2')
model = TFAlbertModel.from_pretrained('albert-xlarge-v2')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
📚 Documentation
Model Description
ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. It was pretrained with two objectives: Masked Language Modeling (MLM) and Sentence Ordering Prediction (SOP).
This is the second version of the xlarge model. Version 2 has better results in nearly all downstream tasks due to different dropout rates, additional training data, and longer training.
This model has the following configuration:
| Property | Details |
|---|---|
| Model Type | ALBERT xlarge v2 |
| Repeating Layers | 24 |
| Embedding Dimension | 128 |
| Hidden Dimension | 2048 |
| Attention Heads | 16 |
| Parameters | 58M |
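These values can be confirmed directly from the checkpoint's configuration; a quick sketch:

```python
from transformers import AlbertConfig

config = AlbertConfig.from_pretrained('albert-xlarge-v2')
print(config.num_hidden_layers)    # 24 repetitions of the shared layer
print(config.embedding_size)       # 128
print(config.hidden_size)          # 2048
print(config.num_attention_heads)  # 16
```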
Intended Uses & Limitations
This model is primarily intended to be fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification, or question answering. For text generation, you should look at models like GPT2.
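For concreteness, here is a minimal sketch of the fine-tuning path for sequence classification; the example text, label, and `num_labels=2` are placeholders for a hypothetical binary task:

```python
import torch
from transformers import AlbertTokenizer, AlbertForSequenceClassification

tokenizer = AlbertTokenizer.from_pretrained('albert-xlarge-v2')
# num_labels is task-specific; 2 is a placeholder for a binary task.
model = AlbertForSequenceClassification.from_pretrained(
    'albert-xlarge-v2', num_labels=2)

inputs = tokenizer("A great movie.", return_tensors='pt')
labels = torch.tensor([1])  # hypothetical positive label

outputs = model(**inputs, labels=labels)
outputs.loss.backward()  # in a real loop, an optimizer step would follow
```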
⚠️ Important Note
This model can produce biased predictions, and this bias will also affect all fine-tuned versions of the model. The examples below illustrate this for occupation prompts:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-xlarge-v2')
>>> unmasker("The man worked as a [MASK].")
[
    {
        "sequence": "[CLS] the man worked as a chauffeur.[SEP]",
        "score": 0.029577180743217468,
        "token": 28744,
        "token_str": "▁chauffeur"
    },
    {
        "sequence": "[CLS] the man worked as a janitor.[SEP]",
        "score": 0.028865724802017212,
        "token": 29477,
        "token_str": "▁janitor"
    },
    {
        "sequence": "[CLS] the man worked as a shoemaker.[SEP]",
        "score": 0.02581118606030941,
        "token": 29024,
        "token_str": "▁shoemaker"
    },
    {
        "sequence": "[CLS] the man worked as a blacksmith.[SEP]",
        "score": 0.01849772222340107,
        "token": 21238,
        "token_str": "▁blacksmith"
    },
    {
        "sequence": "[CLS] the man worked as a lawyer.[SEP]",
        "score": 0.01820771023631096,
        "token": 3672,
        "token_str": "▁lawyer"
    }
]

>>> unmasker("The woman worked as a [MASK].")
[
    {
        "sequence": "[CLS] the woman worked as a receptionist.[SEP]",
        "score": 0.04604868218302727,
        "token": 25331,
        "token_str": "▁receptionist"
    },
    {
        "sequence": "[CLS] the woman worked as a janitor.[SEP]",
        "score": 0.028220869600772858,
        "token": 29477,
        "token_str": "▁janitor"
    },
    {
        "sequence": "[CLS] the woman worked as a paramedic.[SEP]",
        "score": 0.0261906236410141,
        "token": 23386,
        "token_str": "▁paramedic"
    },
    {
        "sequence": "[CLS] the woman worked as a chauffeur.[SEP]",
        "score": 0.024797942489385605,
        "token": 28744,
        "token_str": "▁chauffeur"
    },
    {
        "sequence": "[CLS] the woman worked as a waitress.[SEP]",
        "score": 0.024124596267938614,
        "token": 13678,
        "token_str": "▁waitress"
    }
]
```
Training Data
The ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038 unpublished books, and on English Wikipedia (excluding lists, tables, and headers).
Training Procedure
Preprocessing
The texts are lowercased and tokenized using SentencePiece with a vocabulary size of 30,000. The inputs of the model are then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
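A short sketch showing the tokenizer producing this format for a segment pair (the example sentences are arbitrary):

```python
from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained('albert-xlarge-v2')
encoded = tokenizer("First segment.", "Second segment.")
# The tokenizer lowercases and adds the special tokens automatically.
print(tokenizer.decode(encoded['input_ids']))
# [CLS] first segment.[SEP] second segment.[SEP]
```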
Training
The ALBERT procedure follows the BERT setup. The details of the masking procedure for each sentence are as follows (a sketch implementing this scheme appears after the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token, different from the one they replace.
- In the remaining 10% of the cases, the masked tokens are left as is.
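A minimal sketch of this 80/10/10 scheme, modeled on standard masked-language-modeling collator logic; special-token handling is omitted for brevity, so treat it as illustrative only:

```python
import torch
from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained('albert-xlarge-v2')

def mask_tokens(input_ids, mlm_probability=0.15):
    """Apply the 80/10/10 masking scheme to a batch of token ids."""
    labels = input_ids.clone()
    # Select 15% of positions as prediction targets.
    masked = torch.bernoulli(torch.full(labels.shape, mlm_probability)).bool()
    labels[~masked] = -100  # only masked positions contribute to the loss

    # 80% of targets: replace with [MASK].
    use_mask = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked
    input_ids[use_mask] = tokenizer.mask_token_id

    # 10% of targets: replace with a random vocabulary token.
    use_random = (torch.bernoulli(torch.full(labels.shape, 0.5)).bool()
                  & masked & ~use_mask)
    random_ids = torch.randint(len(tokenizer), labels.shape)
    input_ids[use_random] = random_ids[use_random]

    # The remaining 10% of targets keep their original token.
    return input_ids, labels
```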
Evaluation Results
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
| Model | Average | SQuAD1.1 | SQuAD2.0 | MNLI | SST-2 | RACE |
|---|---|---|---|---|---|---|
| **V2** | | | | | | |
| ALBERT-base | 82.3 | 90.2/83.2 | 82.1/79.3 | 84.6 | 92.9 | 66.8 |
| ALBERT-large | 85.7 | 91.8/85.2 | 84.9/81.8 | 86.5 | 94.9 | 75.2 |
| ALBERT-xlarge | 87.9 | 92.9/86.4 | 87.9/84.1 | 87.9 | 95.4 | 80.7 |
| ALBERT-xxlarge | 90.9 | 94.6/89.1 | 89.8/86.9 | 90.6 | 96.8 | 86.8 |
| **V1** | | | | | | |
| ALBERT-base | 80.1 | 89.3/82.3 | 80.0/77.1 | 81.6 | 90.3 | 64.0 |
| ALBERT-large | 82.4 | 90.6/83.9 | 82.3/79.4 | 83.5 | 91.7 | 68.5 |
| ALBERT-xlarge | 85.5 | 92.5/86.1 | 86.1/83.1 | 86.4 | 92.4 | 74.8 |
| ALBERT-xxlarge | 91.0 | 94.8/89.3 | 90.2/87.4 | 90.8 | 96.9 | 86.5 |
🔧 Technical Details
ALBERT is unusual in that it shares its layers across its Transformer, so all layers have the same weights. Using repeating layers results in a small memory footprint, but the computational cost remains similar to a BERT-like architecture with the same number of hidden layers, since the model still has to iterate through the same number of (repeating) layers.
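This weight sharing can be observed in the loaded model; the sketch below relies on internal attribute names of the current Hugging Face implementation (`encoder.albert_layer_groups`), which are an assumption and may change between versions:

```python
from transformers import AlbertModel

model = AlbertModel.from_pretrained('albert-xlarge-v2')

# The forward pass iterates 24 times...
print(model.config.num_hidden_layers)          # 24
# ...over a single shared group of layer weights.
print(len(model.encoder.albert_layer_groups))  # 1
```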
📄 License
This model is licensed under the [Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0) license.
BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1909-11942,
  author        = {Zhenzhong Lan and
                   Mingda Chen and
                   Sebastian Goodman and
                   Kevin Gimpel and
                   Piyush Sharma and
                   Radu Soricut},
  title         = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language
                   Representations},
  journal       = {CoRR},
  volume        = {abs/1909.11942},
  year          = {2019},
  url           = {http://arxiv.org/abs/1909.11942},
  archivePrefix = {arXiv},
  eprint        = {1909.11942},
  timestamp     = {Fri, 27 Sep 2019 13:04:21 +0200},
  biburl        = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib},
  bibsource     = {dblp computer science bibliography, https://dblp.org}
}
```

