🚀 ALBERT XXLarge v1
ALBERT XXLarge v1 is a model pretrained on English-language corpora with a masked language modeling (MLM) objective. It learns an inner representation of the English language that can be used to extract features useful for downstream tasks, making it broadly applicable across natural language processing.
🚀 Quick Start
The ALBERT XXLarge v1 model can be used directly for masked language modeling, or fine-tuned on a downstream task. You can look for fine-tuned versions on a task that interests you in the model hub.
✨ Key Features
- Self-supervised learning: pretrained on a large English corpus in a self-supervised fashion, with two objectives, masked language modeling (MLM) and sentence order prediction (SOP), to learn an inner representation of the language.
- Parameter sharing: layer weights are shared across the Transformer, which results in a small memory footprint, although the computational cost remains similar to a BERT architecture with the same number of hidden layers.
- Downstream versatility: can be fine-tuned for a variety of downstream tasks such as sequence classification, token classification, and question answering (see the fine-tuning sketch after this list).
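As an illustration of the last point, here is a minimal fine-tuning sketch for sequence classification. It assumes a toy binary-label dataset held in memory (`texts` and `labels` are placeholders) and a single plain PyTorch training step; adjust the data, `num_labels`, and hyperparameters for a real task.

```python
import torch
from transformers import AlbertTokenizer, AlbertForSequenceClassification

tokenizer = AlbertTokenizer.from_pretrained("albert-xxlarge-v1")
# A classification head is added on top of the pretrained encoder and trained from scratch.
model = AlbertForSequenceClassification.from_pretrained("albert-xxlarge-v1", num_labels=2)

# Hypothetical toy data; replace with your own dataset.
texts = ["a great movie", "a terrible movie"]
labels = torch.tensor([1, 0])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

model.train()
outputs = model(**batch, labels=labels)  # returns loss and logits
outputs.loss.backward()
optimizer.step()
```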
📦 Installation
The original model card does not include installation steps; you can follow the Hugging Face documentation, for example:
```bash
pip install transformers
```
You will also need PyTorch or TensorFlow installed to run the examples below.
💻 Usage Examples
Basic Usage
Masked language modeling with the pipeline API:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-xxlarge-v1')
>>> unmasker("Hello I'm a [MASK] model.")
[
  {"sequence": "[CLS] hello i'm a modeling model.[SEP]", "score": 0.05816134437918663, "token": 12807, "token_str": "▁modeling"},
  {"sequence": "[CLS] hello i'm a modelling model.[SEP]", "score": 0.03748830780386925, "token": 23089, "token_str": "▁modelling"},
  {"sequence": "[CLS] hello i'm a model model.[SEP]", "score": 0.033725276589393616, "token": 1061, "token_str": "▁model"},
  {"sequence": "[CLS] hello i'm a runway model.[SEP]", "score": 0.017313428223133087, "token": 8014, "token_str": "▁runway"},
  {"sequence": "[CLS] hello i'm a lingerie model.[SEP]", "score": 0.014405295252799988, "token": 29104, "token_str": "▁lingerie"}
]
```
Advanced Usage
Getting the features of a given text in PyTorch:
```python
from transformers import AlbertTokenizer, AlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-xxlarge-v1')
model = AlbertModel.from_pretrained("albert-xxlarge-v1")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
Getting the features of a given text in TensorFlow:
```python
from transformers import AlbertTokenizer, TFAlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-xxlarge-v1')
model = TFAlbertModel.from_pretrained("albert-xxlarge-v1")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
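In both variants, the returned `output` exposes the per-token features as `last_hidden_state` and a pooled sentence representation as `pooler_output`. A quick way to check the shapes, continuing the PyTorch snippet above (the hidden size of 4096 matches the xxlarge configuration):

```python
print(output.last_hidden_state.shape)  # torch.Size([1, sequence_length, 4096])
print(output.pooler_output.shape)      # torch.Size([1, 4096])
```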
📚 Documentation
Model Description
ALBERT is a Transformer-based model pretrained on a large corpus of English data in a self-supervised fashion. It was pretrained with two objectives:
- Masked language modeling (MLM): the model randomly masks 15% of the words in the input sentence and must predict the masked words, which lets it learn a bidirectional representation of the sentence.
- Sentence order prediction (SOP): a pretraining loss based on predicting the order of two consecutive segments of text.
ALBERT is particular in that it shares its layer weights across the Transformer: all layers have the same weights. This design results in a small memory footprint, but the computational cost remains similar to a BERT architecture with the same number of hidden layers.
This is the first version of the xxlarge model. Version 2 performs better on nearly all downstream tasks thanks to different dropout rates, additional training data, and longer training.
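If you want those improvements, the v2 checkpoint is loaded the same way; only the model identifier changes (a one-line sketch):

```python
from transformers import AlbertModel

model = AlbertModel.from_pretrained("albert-xxlarge-v2")  # drop-in replacement for the v1 checkpoint
```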
The model has the following configuration:

| Attribute | Value |
|---|---|
| Model type | ALBERT XXLarge v1 |
| Layers | 12 repeating layers |
| Embedding dimension | 128 |
| Hidden dimension | 4096 |
| Attention heads | 64 |
| Parameters | 223M |
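These hyperparameters can be read off the published configuration as a sanity check (a sketch; the commented values are what the table above leads you to expect from the hub configuration):

```python
from transformers import AlbertConfig

config = AlbertConfig.from_pretrained("albert-xxlarge-v1")
print(config.num_hidden_layers)    # 12 repeating layers
print(config.num_hidden_groups)    # 1 -> every layer shares the same parameters
print(config.embedding_size)       # 128
print(config.hidden_size)          # 4096
print(config.num_attention_heads)  # 64
```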
Intended Uses & Limitations
You can use the raw model for masked language modeling or sentence order prediction, but it is mostly intended to be fine-tuned on a downstream task. Note that this model is primarily aimed at tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification, or question answering. For text generation, you should look at models like GPT-2 instead.
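To make the "intended to be fine-tuned" point concrete: loading the raw checkpoint with a task-specific head leaves that head randomly initialized, and transformers will warn that it should be trained before use. A small sketch using question answering as the example task:

```python
from transformers import AlbertForQuestionAnswering

# The span-prediction head (`qa_outputs`) is newly initialized here and must be
# fine-tuned on a QA dataset (e.g. SQuAD) before the model gives useful answers.
model = AlbertForQuestionAnswering.from_pretrained("albert-xxlarge-v1")
print(model.qa_outputs)  # a Linear layer mapping the 4096-d hidden states to start/end logits
```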
Limitations and Bias
Even though the training data used for this model could be characterized as fairly neutral, the model can still make biased predictions. This bias will also affect all fine-tuned versions of the model. For example:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-xxlarge-v1')
>>> unmasker("The man worked as a [MASK].")
[
  {"sequence": "[CLS] the man worked as a chauffeur.[SEP]", "score": 0.029577180743217468, "token": 28744, "token_str": "▁chauffeur"},
  {"sequence": "[CLS] the man worked as a janitor.[SEP]", "score": 0.028865724802017212, "token": 29477, "token_str": "▁janitor"},
  {"sequence": "[CLS] the man worked as a shoemaker.[SEP]", "score": 0.02581118606030941, "token": 29024, "token_str": "▁shoemaker"},
  {"sequence": "[CLS] the man worked as a blacksmith.[SEP]", "score": 0.01849772222340107, "token": 21238, "token_str": "▁blacksmith"},
  {"sequence": "[CLS] the man worked as a lawyer.[SEP]", "score": 0.01820771023631096, "token": 3672, "token_str": "▁lawyer"}
]
>>> unmasker("The woman worked as a [MASK].")
[
  {"sequence": "[CLS] the woman worked as a receptionist.[SEP]", "score": 0.04604868218302727, "token": 25331, "token_str": "▁receptionist"},
  {"sequence": "[CLS] the woman worked as a janitor.[SEP]", "score": 0.028220869600772858, "token": 29477, "token_str": "▁janitor"},
  {"sequence": "[CLS] the woman worked as a paramedic.[SEP]", "score": 0.0261906236410141, "token": 23386, "token_str": "▁paramedic"},
  {"sequence": "[CLS] the woman worked as a chauffeur.[SEP]", "score": 0.024797942489385605, "token": 28744, "token_str": "▁chauffeur"},
  {"sequence": "[CLS] the woman worked as a waitress.[SEP]", "score": 0.024124596267938614, "token": 13678, "token_str": "▁waitress"}
]
```
Training Data
The ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038 unpublished books, and English Wikipedia (excluding lists, tables and headers).
Training Procedure
Preprocessing
The texts are lowercased and tokenized using SentencePiece with a vocabulary size of 30,000. The inputs of the model then take the following form (see the tokenizer sketch below):
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
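A small sketch of how a sentence pair is encoded into that format with the pretrained tokenizer (the two sentences are placeholders; the decoded string may differ slightly in spacing):

```python
from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("albert-xxlarge-v1")
encoded = tokenizer("Sentence A", "Sentence B")  # builds [CLS] A [SEP] B [SEP]
print(tokenizer.decode(encoded["input_ids"]))
# roughly: [CLS] sentence a[SEP] sentence b[SEP]  (the tokenizer lowercases its input)
```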
Training
The ALBERT training procedure follows the BERT setup. The details of the masking procedure for each sentence are as follows (a code sketch follows this list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token.
- In the remaining 10% of the cases, the masked tokens are left unchanged.
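A compact sketch of that 80/10/10 rule in plain PyTorch. It assumes a 1-D tensor of token ids with special tokens already excluded from the candidate positions; `mask_tokens` and its arguments are illustrative names, not part of the transformers API.

```python
import torch

def mask_tokens(input_ids, mask_token_id, vocab_size, mlm_prob=0.15):
    """Apply the 15% selection and 80/10/10 replacement scheme to a tensor of token ids."""
    labels = input_ids.clone()
    # Select 15% of the positions as prediction targets.
    selected = torch.rand(input_ids.shape) < mlm_prob
    labels[~selected] = -100  # compute the MLM loss only on selected positions

    # 80% of the selected positions are replaced by [MASK].
    masked = selected & (torch.rand(input_ids.shape) < 0.8)
    input_ids[masked] = mask_token_id

    # Half of the remaining selected positions (10% overall) get a random token.
    randomized = selected & ~masked & (torch.rand(input_ids.shape) < 0.5)
    input_ids[randomized] = torch.randint(vocab_size, input_ids.shape)[randomized]

    # The last 10% of the selected positions are left unchanged.
    return input_ids, labels
```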
Evaluation Results
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:

| Model | Average | SQuAD1.1 | SQuAD2.0 | MNLI | SST-2 | RACE |
|---|---|---|---|---|---|---|
| V2 | | | | | | |
| ALBERT-base | 82.3 | 90.2/83.2 | 82.1/79.3 | 84.6 | 92.9 | 66.8 |
| ALBERT-large | 85.7 | 91.8/85.2 | 84.9/81.8 | 86.5 | 94.9 | 75.2 |
| ALBERT-xlarge | 87.9 | 92.9/86.4 | 87.9/84.1 | 87.9 | 95.4 | 80.7 |
| ALBERT-xxlarge | 90.9 | 94.6/89.1 | 89.8/86.9 | 90.6 | 96.8 | 86.8 |
| V1 | | | | | | |
| ALBERT-base | 80.1 | 89.3/82.3 | 80.0/77.1 | 81.6 | 90.3 | 64.0 |
| ALBERT-large | 82.4 | 90.6/83.9 | 82.3/79.4 | 83.5 | 91.7 | 68.5 |
| ALBERT-xlarge | 85.5 | 92.5/86.1 | 86.1/83.1 | 86.4 | 92.4 | 74.8 |
| ALBERT-xxlarge | 91.0 | 94.8/89.3 | 90.2/87.4 | 90.8 | 96.9 | 86.5 |
BibTeX Entry and Citation Info
```bibtex
@article{DBLP:journals/corr/abs-1909-11942,
  author        = {Zhenzhong Lan and
                   Mingda Chen and
                   Sebastian Goodman and
                   Kevin Gimpel and
                   Piyush Sharma and
                   Radu Soricut},
  title         = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language
                   Representations},
  journal       = {CoRR},
  volume        = {abs/1909.11942},
  year          = {2019},
  url           = {http://arxiv.org/abs/1909.11942},
  archivePrefix = {arXiv},
  eprint        = {1909.11942},
  timestamp     = {Fri, 27 Sep 2019 13:04:21 +0200},
  biburl        = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib},
  bibsource     = {dblp computer science bibliography, https://dblp.org}
}
```
📄 License
This project is licensed under the Apache 2.0 License.



