🚀 ARBERTv2
ARBERTv2 is an updated version of the ARBERT model described in our ACL 2021 paper "ARBERT & MARBERT: Deep Bidirectional Transformers for Arabic". ARBERTv2 is introduced in our paper "ORCA: A Challenging Benchmark for Arabic Language Understanding". It is trained on Modern Standard Arabic (MSA) data: 243 GB of text comprising 27.8 billion tokens.
| Property | Details |
| --- | --- |
| Language | Arabic |
| Tags | Arabic BERT, MSA, Twitter, Masked Language Model |
| Widget | Input text: "اللغة [MASK] هي لغة العرب" |
🚀 Quick Start
ARBERTv2 is an updated version of the ARBERT model described in our ACL 2021 paper. It is pre-trained on a large amount of Modern Standard Arabic (MSA) data and can be fine-tuned for Arabic language understanding tasks; a minimal usage sketch follows.
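The snippet below is a minimal sketch of loading the model with the Hugging Face `transformers` library and filling the masked token from the widget example above. It assumes the checkpoint is published on the Hub under the ID `UBC-NLP/ARBERTv2`; substitute the actual model path or ID if it differs.

```python
# Minimal fill-mask sketch (assumes the Hub ID "UBC-NLP/ARBERTv2"; adjust if needed).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="UBC-NLP/ARBERTv2")

# Widget example from the table above: "اللغة [MASK] هي لغة العرب"
predictions = fill_mask("اللغة [MASK] هي لغة العرب")

for pred in predictions:
    # Each prediction includes the completed sequence, the predicted token, and its score.
    print(f"{pred['sequence']}  (score: {pred['score']:.3f})")
```

For downstream classification tasks, the same checkpoint can be loaded with `AutoModelForSequenceClassification` (or another task head) and fine-tuned like any other BERT-style encoder.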
📚 Documentation
BibTeX
If you use our model (ARBERTv2) in your scientific publication, or if you find the resources in this repository useful, please cite our papers as follows (to be updated):
@inproceedings{abdul-mageed-etal-2021-arbert,
title = "{ARBERT} {\&} {MARBERT}: Deep Bidirectional Transformers for {A}rabic",
author = "Abdul-Mageed, Muhammad and
Elmadany, AbdelRahim and
Nagoudi, El Moatez Billah",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.551",
doi = "10.18653/v1/2021.acl-long.551",
pages = "7088--7105",
abstract = "Pre-trained language models (LMs) are currently integral to many natural language processing systems. Although multilingual LMs were also introduced to serve many languages, these have limitations such as being costly at inference time and the size and diversity of non-English data involved in their pre-training. We remedy these issues for a collection of diverse Arabic varieties by introducing two powerful deep bidirectional transformer-based models, ARBERT and MARBERT. To evaluate our models, we also introduce ARLUE, a new benchmark for multi-dialectal Arabic language understanding evaluation. ARLUE is built using 42 datasets targeting six different task clusters, allowing us to offer a series of standardized experiments under rich conditions. When fine-tuned on ARLUE, our models collectively achieve new state-of-the-art results across the majority of tasks (37 out of 48 classification tasks, on the 42 datasets). Our best model acquires the highest ARLUE score (77.40) across all six task clusters, outperforming all other models including XLM-R Large ( 3.4x larger size). Our models are publicly available at https://github.com/UBC-NLP/marbert and ARLUE will be released through the same repository.",
}
@article{elmadany2022orca,
title={ORCA: A Challenging Benchmark for Arabic Language Understanding},
author={Elmadany, AbdelRahim and Nagoudi, El Moatez Billah and Abdul-Mageed, Muhammad},
journal={arXiv preprint arXiv:2212.10758},
year={2022}
}
Acknowledgements
We gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada, the Canadian Foundation for Innovation, ComputeCanada, and UBC ARC-Sockeye. We also thank the Google TensorFlow Research Cloud (TFRC) program for providing us with free TPU access.