🚀 ALBERT Base v2
A model pretrained on English with a masked language modeling (MLM) objective, offering efficient language representation learning.
🚀 Quick Start
The ALBERT Base v2 model is a powerful tool for various NLP tasks. You can use it directly with a pipeline for masked language modeling, or fine-tune it on a downstream task.
✨ Features
- Bidirectional Representation: Learns a bidirectional representation of sentences through masked language modeling (MLM).
- Sentence Ordering Prediction: Uses SOP to understand the ordering of text segments.
- Shared Layers: Shares layers across the Transformer, resulting in a small memory footprint.
- Improved Version 2: With different dropout rates, additional training data, and longer training, it performs better on downstream tasks.
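As a minimal sketch of what cross-layer sharing means in practice (assuming PyTorch and transformers are installed), you can count the parameters of the pretrained checkpoint:
from transformers import AlbertModel

model = AlbertModel.from_pretrained("albert-base-v2")

# All 12 Transformer layers share one set of weights, so the total stays
# around 11M parameters (BERT-base, without sharing, has roughly 110M).
num_params = sum(p.numel() for p in model.parameters())
print(f"Parameters: {num_params / 1e6:.1f}M")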
📦 Installation
To use this model, you need to install the transformers library. You can install it via pip:
pip install transformers
💻 Usage Examples
Basic Usage
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-base-v2')
>>> unmasker("Hello I'm a [MASK] model.")
[
{
"sequence":"[CLS] hello i'm a modeling model.[SEP]",
"score":0.05816134437918663,
"token":12807,
"token_str":"▁modeling"
},
{
"sequence":"[CLS] hello i'm a modelling model.[SEP]",
"score":0.03748830780386925,
"token":23089,
"token_str":"▁modelling"
},
{
"sequence":"[CLS] hello i'm a model model.[SEP]",
"score":0.033725276589393616,
"token":1061,
"token_str":"▁model"
},
{
"sequence":"[CLS] hello i'm a runway model.[SEP]",
"score":0.017313428223133087,
"token":8014,
"token_str":"▁runway"
},
{
"sequence":"[CLS] hello i'm a lingerie model.[SEP]",
"score":0.014405295252799988,
"token":29104,
"token_str":"▁lingerie"
}
]
Advanced Usage
Get features in PyTorch
from transformers import AlbertTokenizer, AlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
model = AlbertModel.from_pretrained("albert-base-v2")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
Get features in TensorFlow
from transformers import AlbertTokenizer, TFAlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
model = TFAlbertModel.from_pretrained("albert-base-v2")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
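In both cases the returned object exposes the extracted features as attributes (in recent versions of transformers; older versions return plain tuples). Continuing from the PyTorch example above:
# Per-token hidden states and the pooled [CLS] representation.
last_hidden_state = output.last_hidden_state  # shape: (batch, seq_len, 768)
pooled_output = output.pooler_output          # shape: (batch, 768)
print(last_hidden_state.shape, pooled_output.shape)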
📚 Documentation
Model description
ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. It was pretrained with two objectives: masked language modeling (MLM) and sentence ordering prediction (SOP). In this way, it learns an inner representation of the English language that can be used for downstream tasks.
This is the second version of the base model. Version 2 has better results in nearly all downstream tasks due to different dropout rates, additional training data, and longer training.
The model has the following configuration:
Property | Details |
---|---|
Repeating Layers | 12 |
Embedding Dimension | 128 |
Hidden Dimension | 768 |
Attention Heads | 12 |
Parameters | 11M |
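These figures can also be read directly from the published configuration; a small sketch (attribute names are those used by transformers' AlbertConfig):
from transformers import AlbertConfig

config = AlbertConfig.from_pretrained("albert-base-v2")
print(config.num_hidden_layers)    # 12 repeating layers
print(config.embedding_size)       # 128 embedding dimension
print(config.hidden_size)          # 768 hidden dimension
print(config.num_attention_heads)  # 12 attention heads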
Intended uses & limitations
You can use the raw model for masked language modeling or sentence ordering prediction, but it is mostly intended to be fine-tuned on a downstream task. It is suited to tasks that use the whole sentence to make decisions, such as sequence classification, token classification, or question answering. For text generation, consider models like GPT2 instead.
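As an illustration of fine-tuning on a downstream task, here is a minimal sequence-classification sketch. It assumes the datasets library is installed and uses GLUE SST-2 as an example task; the hyperparameters are placeholders, not the values used in the paper.
from datasets import load_dataset
from transformers import (AlbertForSequenceClassification, AlbertTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
model = AlbertForSequenceClassification.from_pretrained("albert-base-v2", num_labels=2)

# Tokenize the SST-2 sentences (the "label" column is picked up by the Trainer).
dataset = load_dataset("glue", "sst2")
encoded = dataset.map(
    lambda batch: tokenizer(batch["sentence"], truncation=True,
                            padding="max_length", max_length=128),
    batched=True,
)

args = TrainingArguments(output_dir="albert-base-v2-sst2",
                         per_device_train_batch_size=16,
                         num_train_epochs=1)
trainer = Trainer(model=model, args=args,
                  train_dataset=encoded["train"],
                  eval_dataset=encoded["validation"])
trainer.train()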
Limitations and bias
The model can produce biased predictions, and this bias will also affect all fine-tuned versions of it. For example:
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-base-v2')
>>> unmasker("The man worked as a [MASK].")
[
{
"sequence":"[CLS] the man worked as a chauffeur.[SEP]",
"score":0.029577180743217468,
"token":28744,
"token_str":"▁chauffeur"
},
{
"sequence":"[CLS] the man worked as a janitor.[SEP]",
"score":0.028865724802017212,
"token":29477,
"token_str":"▁janitor"
},
{
"sequence":"[CLS] the man worked as a shoemaker.[SEP]",
"score":0.02581118606030941,
"token":29024,
"token_str":"▁shoemaker"
},
{
"sequence":"[CLS] the man worked as a blacksmith.[SEP]",
"score":0.01849772222340107,
"token":21238,
"token_str":"▁blacksmith"
},
{
"sequence":"[CLS] the man worked as a lawyer.[SEP]",
"score":0.01820771023631096,
"token":3672,
"token_str":"▁lawyer"
}
]
>>> unmasker("The woman worked as a [MASK].")
[
{
"sequence":"[CLS] the woman worked as a receptionist.[SEP]",
"score":0.04604868218302727,
"token":25331,
"token_str":"▁receptionist"
},
{
"sequence":"[CLS] the woman worked as a janitor.[SEP]",
"score":0.028220869600772858,
"token":29477,
"token_str":"▁janitor"
},
{
"sequence":"[CLS] the woman worked as a paramedic.[SEP]",
"score":0.0261906236410141,
"token":23386,
"token_str":"▁paramedic"
},
{
"sequence":"[CLS] the woman worked as a chauffeur.[SEP]",
"score":0.024797942489385605,
"token":28744,
"token_str":"▁chauffeur"
},
{
"sequence":"[CLS] the woman worked as a waitress.[SEP]",
"score":0.024124596267938614,
"token":13678,
"token_str":"▁waitress"
}
]
Training data
The ALBERT model was pretrained on BookCorpus, a dataset of 11,038 unpublished books, and English Wikipedia (excluding lists, tables, and headers).
Training procedure
Preprocessing
The texts are lowercased and tokenized using SentencePiece with a vocabulary size of 30,000. The model inputs are in the form:
[CLS] Sentence A [SEP] Sentence B [SEP]
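A quick way to check this format is to pass a sentence pair to the tokenizer and decode the result (output shown as a comment; exact spacing may vary by version):
from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
encoded = tokenizer("Sentence A", "Sentence B")
print(tokenizer.decode(encoded["input_ids"]))
# prints something like: [CLS] sentence a[SEP] sentence b[SEP]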
Training
The ALBERT procedure follows the BERT setup. The details of the masking procedure for each sentence are the following (see the sketch after this list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by [MASK].
- In 10% of the cases, the masked tokens are replaced by a random token.
- In the remaining 10% of the cases, the masked tokens are left unchanged.
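The sketch below re-implements that 80/10/10 rule for illustration only; it is not the original training code, and unlike the real procedure it does not exclude special tokens such as [CLS] and [SEP] from the candidates.
import torch
from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
inputs = tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt")
input_ids = inputs["input_ids"].clone()
labels = input_ids.clone()

# Select 15% of the tokens as masking candidates; loss is computed only on them.
candidates = torch.bernoulli(torch.full(input_ids.shape, 0.15)).bool()
labels[~candidates] = -100

# 80% of the candidates become [MASK].
masked = torch.bernoulli(torch.full(input_ids.shape, 0.8)).bool() & candidates
input_ids[masked] = tokenizer.mask_token_id

# 10% become a random token (half of the remaining 20%).
randomized = torch.bernoulli(torch.full(input_ids.shape, 0.5)).bool() & candidates & ~masked
input_ids[randomized] = torch.randint(len(tokenizer), input_ids.shape)[randomized]

# The final 10% of the candidates are left unchanged.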
Evaluation results
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
Model | Average | SQuAD1.1 | SQuAD2.0 | MNLI | SST-2 | RACE |
---|---|---|---|---|---|---|
V2 | | | | | | |
ALBERT-base | 82.3 | 90.2/83.2 | 82.1/79.3 | 84.6 | 92.9 | 66.8 |
ALBERT-large | 85.7 | 91.8/85.2 | 84.9/81.8 | 86.5 | 94.9 | 75.2 |
ALBERT-xlarge | 87.9 | 92.9/86.4 | 87.9/84.1 | 87.9 | 95.4 | 80.7 |
ALBERT-xxlarge | 90.9 | 94.6/89.1 | 89.8/86.9 | 90.6 | 96.8 | 86.8 |
V1 | | | | | | |
ALBERT-base | 80.1 | 89.3/82.3 | 80.0/77.1 | 81.6 | 90.3 | 64.0 |
ALBERT-large | 82.4 | 90.6/83.9 | 82.3/79.4 | 83.5 | 91.7 | 68.5 |
ALBERT-xlarge | 85.5 | 92.5/86.1 | 86.1/83.1 | 86.4 | 92.4 | 74.8 |
ALBERT-xxlarge | 91.0 | 94.8/89.3 | 90.2/87.4 | 90.8 | 96.9 | 86.5 |
BibTeX entry and citation info
@article{DBLP:journals/corr/abs-1909-11942,
author = {Zhenzhong Lan and
Mingda Chen and
Sebastian Goodman and
Kevin Gimpel and
Piyush Sharma and
Radu Soricut},
title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language
Representations},
journal = {CoRR},
volume = {abs/1909.11942},
year = {2019},
url = {http://arxiv.org/abs/1909.11942},
archivePrefix = {arXiv},
eprint = {1909.11942},
timestamp = {Fri, 27 Sep 2019 13:04:21 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
📄 License
This model is released under the Apache 2.0 license.

