🚀 ALBERT XLarge v1
A model pretrained on English text with a masked language modeling (MLM) objective. It can be used to extract language features for downstream tasks.
🚀 Quick Start
This is a model pretrained on English text using a masked language modeling (MLM) objective. It was introduced in the paper ALBERT: A Lite BERT for Self-supervised Learning of Language Representations (see the citation below) and first released in the google-research/albert repository. Like all ALBERT models, this one is uncased: it does not distinguish between "english" and "English".
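A quick way to see the uncased behaviour (an illustrative check; it assumes the default tokenizer settings for this checkpoint, which lowercase the input):
from transformers import AlbertTokenizer
tokenizer = AlbertTokenizer.from_pretrained('albert-xlarge-v1')
# Both spellings are lowercased before SentencePiece tokenization, so they yield the same tokens.
print(tokenizer.tokenize("English") == tokenizer.tokenize("english"))  # expected: True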
Disclaimer: The team releasing ALBERT didn't write a model card for this model, so this card was written by the Hugging Face team.
✨ Features
Model Description
ALBERT is a transformers model pretrained on a large English corpus in a self-supervised manner. It was trained on raw texts without human labeling, using an automatic process to generate inputs and labels. Specifically, it was pretrained with two objectives:
- Masked language modeling (MLM): the model randomly masks 15% of the words in the input sentence, then runs the masked sentence through the network and predicts the masked words. This lets the model learn a bidirectional representation of the sentence, unlike traditional RNNs, which see words one after the other, or autoregressive models like GPT, which mask future tokens.
- Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting whether two consecutive text segments appear in their original order (a toy sketch of this pair construction follows the list).
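A toy sketch of how SOP training pairs can be constructed (illustrative only; the function name and labels are assumptions, not the actual pretraining code):
import random

def make_sop_pair(segment_a, segment_b):
    # Positive example: the two consecutive segments in their original order (label 1).
    # Negative example: the same two segments with their order swapped (label 0).
    if random.random() < 0.5:
        return (segment_a, segment_b), 1
    return (segment_b, segment_a), 0

pair, label = make_sop_pair("the cat sat down.", "then it fell asleep.")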
ALBERT shares its layers across the Transformer, so all layers have the same weights. This layer repetition results in a small memory footprint, but the computational cost stays similar to a BERT-like architecture with the same number of hidden layers.
This is the first version of the xlarge model. Version 2 differs from version 1 in its dropout rates, additional training data, and longer training, and it performs better on most downstream tasks.
This model has the following configuration (the snippet after the list shows how to read these values from the model config):
- 24 repeating layers
- 128 embedding dimension
- 2048 hidden dimension
- 16 attention heads
- 58M parameters
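These values can be checked against the published configuration (an illustrative snippet; the attribute names are those of the Hugging Face AlbertConfig):
from transformers import AlbertConfig
config = AlbertConfig.from_pretrained('albert-xlarge-v1')
# 24 repeating layers, 128-dim embeddings, 2048-dim hidden states, 16 attention heads.
print(config.num_hidden_layers, config.embedding_size, config.hidden_size, config.num_attention_heads)
# num_hidden_groups == 1 reflects the cross-layer weight sharing described above.
print(config.num_hidden_groups)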
Intended Uses & Limitations
You can use the raw model for masked language modeling or sentence ordering prediction, but it is mainly intended to be fine-tuned on a downstream task. Check the model hub for fine-tuned versions.
Note that this model is primarily aimed at tasks that use the whole sentence (possibly masked) to make decisions, such as sequence classification, token classification, or question answering. For text generation, consider models like GPT2.
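For example, a task-specific head can be loaded on top of this checkpoint for fine-tuning (a minimal sketch; the sequence-classification head and the num_labels value are illustrative choices, not part of the original card):
from transformers import AlbertTokenizer, AlbertForSequenceClassification
tokenizer = AlbertTokenizer.from_pretrained('albert-xlarge-v1')
# Adds a randomly initialized classification head on top of the pretrained encoder;
# the model is then trained on labeled examples for the downstream task.
model = AlbertForSequenceClassification.from_pretrained('albert-xlarge-v1', num_labels=2)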
📦 Installation
The usage examples below only require the Hugging Face transformers library together with PyTorch or TensorFlow (for example: pip install transformers).
💻 Usage Examples
Basic Usage
You can use this model directly with a pipeline for masked language modeling:
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-xlarge-v1')
>>> unmasker("Hello I'm a [MASK] model.")
[{"sequence": "[CLS] hello i'm a modeling model.[SEP]", "score": 0.05816134437918663, "token": 12807, "token_str": "▁modeling"},
 {"sequence": "[CLS] hello i'm a modelling model.[SEP]", "score": 0.03748830780386925, "token": 23089, "token_str": "▁modelling"},
 {"sequence": "[CLS] hello i'm a model model.[SEP]", "score": 0.033725276589393616, "token": 1061, "token_str": "▁model"},
 {"sequence": "[CLS] hello i'm a runway model.[SEP]", "score": 0.017313428223133087, "token": 8014, "token_str": "▁runway"},
 {"sequence": "[CLS] hello i'm a lingerie model.[SEP]", "score": 0.014405295252799988, "token": 29104, "token_str": "▁lingerie"}]
Advanced Usage
Get the features of a given text in PyTorch:
from transformers import AlbertTokenizer, AlbertModel
# Load the pretrained SentencePiece tokenizer and the bare encoder (no task head).
tokenizer = AlbertTokenizer.from_pretrained('albert-xlarge-v1')
model = AlbertModel.from_pretrained('albert-xlarge-v1')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
# output.last_hidden_state holds one 2048-dimensional vector per input token.
output = model(**encoded_input)
In TensorFlow:
from transformers import AlbertTokenizer, TFAlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-xlarge-v1')
model = TFAlbertModel.from_pretrained('albert-xlarge-v1')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
# As in the PyTorch example, output.last_hidden_state contains the token-level features.
output = model(encoded_input)
Limitations and Bias
Even if the training data used for this model is fairly neutral, the model can still produce biased predictions:
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-xlarge-v1')
>>> unmasker("The man worked as a [MASK].")
[{"sequence": "[CLS] the man worked as a chauffeur.[SEP]", "score": 0.029577180743217468, "token": 28744, "token_str": "▁chauffeur"},
 {"sequence": "[CLS] the man worked as a janitor.[SEP]", "score": 0.028865724802017212, "token": 29477, "token_str": "▁janitor"},
 {"sequence": "[CLS] the man worked as a shoemaker.[SEP]", "score": 0.02581118606030941, "token": 29024, "token_str": "▁shoemaker"},
 {"sequence": "[CLS] the man worked as a blacksmith.[SEP]", "score": 0.01849772222340107, "token": 21238, "token_str": "▁blacksmith"},
 {"sequence": "[CLS] the man worked as a lawyer.[SEP]", "score": 0.01820771023631096, "token": 3672, "token_str": "▁lawyer"}]
>>> unmasker("The woman worked as a [MASK].")
[{"sequence": "[CLS] the woman worked as a receptionist.[SEP]", "score": 0.04604868218302727, "token": 25331, "token_str": "▁receptionist"},
 {"sequence": "[CLS] the woman worked as a janitor.[SEP]", "score": 0.028220869600772858, "token": 29477, "token_str": "▁janitor"},
 {"sequence": "[CLS] the woman worked as a paramedic.[SEP]", "score": 0.0261906236410141, "token": 23386, "token_str": "▁paramedic"},
 {"sequence": "[CLS] the woman worked as a chauffeur.[SEP]", "score": 0.024797942489385605, "token": 28744, "token_str": "▁chauffeur"},
 {"sequence": "[CLS] the woman worked as a waitress.[SEP]", "score": 0.024124596267938614, "token": 13678, "token_str": "▁waitress"}]
This bias will also affect all fine-tuned versions of this model.
📚 Documentation
Training Data
The ALBERT model was pretrained on BookCorpus, a dataset of 11,038 unpublished books, and English Wikipedia (excluding lists, tables, and headers).
Training Procedure
Preprocessing
The texts are lowercased and tokenized using SentencePiece with a vocabulary size of 30,000. The model inputs are in the form:
[CLS] Sentence A [SEP] Sentence B [SEP]
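The tokenizer produces this layout automatically when given a sentence pair (an illustrative snippet using the Hugging Face tokenizer):
from transformers import AlbertTokenizer
tokenizer = AlbertTokenizer.from_pretrained('albert-xlarge-v1')
encoded = tokenizer("Sentence A", "Sentence B")
# Decoding shows the [CLS] ... [SEP] ... [SEP] layout used during pretraining.
print(tokenizer.decode(encoded['input_ids']))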
Training
The ALBERT procedure follows the BERT setup.
The details of the masking procedure for each sentence are as follows (a toy sketch is shown after the list):
- 15% of the tokens are masked.
- In 80% of cases, the masked tokens are replaced by [MASK].
- In 10% of cases, the masked tokens are replaced by a random token.
- In the remaining 10% of cases, the masked tokens are left unchanged.
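A toy sketch of this 80/10/10 rule over already-tokenized input (illustrative only; the function and the tiny vocabulary are assumptions, not the actual training code):
import random

def mask_tokens(tokens, vocab, mlm_prob=0.15, mask_token='[MASK]'):
    # Select roughly 15% of positions; of those, 80% become [MASK],
    # 10% become a random token, and 10% are left unchanged.
    masked = list(tokens)
    for i in range(len(tokens)):
        if random.random() < mlm_prob:
            r = random.random()
            if r < 0.8:
                masked[i] = mask_token
            elif r < 0.9:
                masked[i] = random.choice(vocab)
            # else: keep the original token (it is still a prediction target)
    return masked

print(mask_tokens(['the', 'cat', 'sat', 'on', 'the', 'mat'], vocab=['dog', 'ran', 'hat']))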
Evaluation Results
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
| | Average | SQuAD1.1 | SQuAD2.0 | MNLI | SST-2 | RACE |
|---|---|---|---|---|---|---|
| V2 | | | | | | |
| ALBERT-base | 82.3 | 90.2/83.2 | 82.1/79.3 | 84.6 | 92.9 | 66.8 |
| ALBERT-large | 85.7 | 91.8/85.2 | 84.9/81.8 | 86.5 | 94.9 | 75.2 |
| ALBERT-xlarge | 87.9 | 92.9/86.4 | 87.9/84.1 | 87.9 | 95.4 | 80.7 |
| ALBERT-xxlarge | 90.9 | 94.6/89.1 | 89.8/86.9 | 90.6 | 96.8 | 86.8 |
| V1 | | | | | | |
| ALBERT-base | 80.1 | 89.3/82.3 | 80.0/77.1 | 81.6 | 90.3 | 64.0 |
| ALBERT-large | 82.4 | 90.6/83.9 | 82.3/79.4 | 83.5 | 91.7 | 68.5 |
| ALBERT-xlarge | 85.5 | 92.5/86.1 | 86.1/83.1 | 86.4 | 92.4 | 74.8 |
| ALBERT-xxlarge | 91.0 | 94.8/89.3 | 90.2/87.4 | 90.8 | 96.9 | 86.5 |
BibTeX entry and citation info
@article{DBLP:journals/corr/abs-1909-11942,
author = {Zhenzhong Lan and
Mingda Chen and
Sebastian Goodman and
Kevin Gimpel and
Piyush Sharma and
Radu Soricut},
title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language
Representations},
journal = {CoRR},
volume = {abs/1909.11942},
year = {2019},
url = {http://arxiv.org/abs/1909.11942},
archivePrefix = {arXiv},
eprint = {1909.11942},
timestamp = {Fri, 27 Sep 2019 13:04:21 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
📄 License
This model is released under the Apache 2.0 license.

