🚀 ARBERT - Arabic Pre-trained Language Model
ARBERT is a large-scale pre-trained masked language model focused on Modern Standard Arabic (MSA). It provides strong support for Arabic natural language processing tasks and is particularly valuable for Arabic semantic understanding.
🚀 Quick Start
ARBERT is one of three models described in our ACL 2021 paper "ARBERT & MARBERT: Deep Bidirectional Transformers for Arabic". ARBERT is trained with the same architecture as BERT-base: 12 attention layers, each with 12 attention heads and 768 hidden dimensions, a vocabulary of 100K WordPieces, for a total of about 163M parameters. We train ARBERT on a collection of Arabic datasets comprising 61GB of text (6.2B tokens). For more information, please visit our GitHub repository.
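Since ARBERT is a BERT-style masked language model, it can be loaded with the Hugging Face transformers library. Below is a minimal fill-mask sketch; it assumes the checkpoint is hosted on the Hub under the id UBC-NLP/ARBERT (adjust the id or use a local path as needed).

```python
from transformers import pipeline

# Assumption: the checkpoint is available on the Hugging Face Hub as "UBC-NLP/ARBERT".
fill_mask = pipeline("fill-mask", model="UBC-NLP/ARBERT")

# Build the input with the tokenizer's own mask token rather than hard-coding "[MASK]".
masked = f"اللغة العربية لغة {fill_mask.tokenizer.mask_token}"

# Print the top predictions for the masked position with their scores.
for pred in fill_mask(masked):
    print(pred["token_str"], round(pred["score"], 4))
```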
Model Information

| Property | Details |
| --- | --- |
| Model type | Transformer-based deep bidirectional pre-trained masked language model |
| Training data | A collection of Arabic datasets comprising 61GB of text (6.2B tokens) |
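To double-check these hyperparameters against a downloaded checkpoint, the model configuration can be inspected directly. A small sketch, again assuming the Hub id UBC-NLP/ARBERT:

```python
from transformers import AutoConfig

# Assumption: Hub id "UBC-NLP/ARBERT"; replace with a local path if you have the files on disk.
config = AutoConfig.from_pretrained("UBC-NLP/ARBERT")

print(config.num_hidden_layers)    # expected: 12 attention layers
print(config.num_attention_heads)  # expected: 12 heads per layer
print(config.hidden_size)          # expected: 768 hidden dimensions
print(config.vocab_size)           # expected: ~100K WordPieces
```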
📚 Documentation
BibTeX
If you use our models (ARBERT, MARBERT, or MARBERTv2) in a scientific publication, or if you find the resources in this repository useful, please cite our paper as follows (to be updated):
@inproceedings{abdul-mageed-etal-2021-arbert,
title = "{ARBERT} {\&} {MARBERT}: Deep Bidirectional Transformers for {A}rabic",
author = "Abdul-Mageed, Muhammad and
Elmadany, AbdelRahim and
Nagoudi, El Moatez Billah",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.551",
doi = "10.18653/v1/2021.acl-long.551",
pages = "7088--7105",
abstract = "Pre-trained language models (LMs) are currently integral to many natural language processing systems. Although multilingual LMs were also introduced to serve many languages, these have limitations such as being costly at inference time and the size and diversity of non-English data involved in their pre-training. We remedy these issues for a collection of diverse Arabic varieties by introducing two powerful deep bidirectional transformer-based models, ARBERT and MARBERT. To evaluate our models, we also introduce ARLUE, a new benchmark for multi-dialectal Arabic language understanding evaluation. ARLUE is built using 42 datasets targeting six different task clusters, allowing us to offer a series of standardized experiments under rich conditions. When fine-tuned on ARLUE, our models collectively achieve new state-of-the-art results across the majority of tasks (37 out of 48 classification tasks, on the 42 datasets). Our best model acquires the highest ARLUE score (77.40) across all six task clusters, outperforming all other models including XLM-R Large ( 3.4x larger size). Our models are publicly available at https://github.com/UBC-NLP/marbert and ARLUE will be released through the same repository.",
}
👏 Acknowledgments
We gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada, the Canadian Foundation for Innovation, ComputeCanada, and UBC ARC-Sockeye. We also thank the Google TensorFlow Research Cloud (TFRC) program for providing us with free TPU access.