🚀 CAMeLBERT-MSA DID MADAR Twitter-5 Model
This is a dialect identification (DID) model built by fine-tuning the CAMeLBERT-MSA model on the MADAR Twitter-5 dataset. It classifies the dialect of Arabic text, such as tweets, into one of the dataset's labels.
🚀 Quick Start
You can use the CAMeLBERT-MSA DID MADAR Twitter-5 model as part of a transformers pipeline. The model will also be available in CAMeL Tools soon.
✨ Key Features
- Dialect identification (DID) model built by fine-tuning the CAMeLBERT-MSA pre-trained model
- Fine-tuned on the MADAR Twitter-5 dataset, which covers 21 labels
- Usable directly through the transformers text-classification pipeline
📦 Installation
To download the model, you need `transformers>=3.5.0`. Otherwise, you can download the model files manually.
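If you would rather fetch the files yourself, one option is the huggingface_hub library; below is a minimal sketch of a manual download, assuming huggingface_hub is installed (the card itself does not prescribe this method):

```python
# A minimal sketch of a manual download, assuming the huggingface_hub
# package is installed (pip install huggingface_hub).
from huggingface_hub import snapshot_download

# Downloads all model files to the local cache and returns their path.
local_dir = snapshot_download(
    repo_id="CAMeL-Lab/bert-base-arabic-camelbert-msa-did-madar-twitter5"
)
print(local_dir)
```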
💻 Usage Examples
Basic Usage
```python
>>> from transformers import pipeline
>>> did = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-did-madar-twitter5')
>>> sentences = ['عامل ايه ؟', 'شلونك ؟ شخبارك ؟']
>>> did(sentences)
[{'label': 'Egypt', 'score': 0.5741344094276428},
 {'label': 'Kuwait', 'score': 0.5225679278373718}]
```
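If you prefer to work below the pipeline abstraction, here is a sketch using the generic transformers classes for sequence classification; this loading pattern is a standard-API assumption on our part, not something specified by the card:

```python
# A sketch of direct (non-pipeline) inference. The class names are the
# standard transformers API for text classification, assumed here rather
# than taken from this model card.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = 'CAMeL-Lab/bert-base-arabic-camelbert-msa-did-madar-twitter5'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer(['عامل ايه ؟'], return_tensors='pt', padding=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring logit back to its dialect label.
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```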
📚 Detailed Documentation
Model Description
The CAMeLBERT-MSA DID MADAR Twitter-5 model is a dialect identification (DID) model built by fine-tuning the CAMeLBERT-MSA model. For fine-tuning, we used the MADAR Twitter-5 dataset, which includes 21 labels. The fine-tuning procedure and the hyperparameters we used can be found in the paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models", and the fine-tuning code is available here.
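To see exactly which 21 labels the classifier emits, one can inspect the model configuration; a small sketch, relying only on the standard id2label field of transformers configs:

```python
# A sketch for listing the dialect labels stored in the model
# configuration (id2label is a standard transformers config field).
from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    'CAMeL-Lab/bert-base-arabic-camelbert-msa-did-madar-twitter5'
)
for idx, label in sorted(config.id2label.items()):
    print(idx, label)
```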
Intended Uses
You can use this model as part of a transformers pipeline, and it will soon be available in CAMeL Tools.
📄 License
This project is licensed under the Apache-2.0 License.
📚 Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
    title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
    author = "Inoue, Go and
      Alhafni, Bashar and
      Baimukan, Nurpeiis and
      Bouamor, Houda and
      Habash, Nizar",
    booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
    month = apr,
    year = "2021",
    address = "Kyiv, Ukraine (Online)",
    publisher = "Association for Computational Linguistics",
    abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
```