🚀 CAMeLBERT-DA Sentiment Analysis Model
The CAMeLBERT-DA SA model is a sentiment analysis (SA) model built by fine-tuning the CAMeLBERT Dialectal Arabic (DA) model. For fine-tuning, we used the ASTD, ArSAS, and SemEval datasets. Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models". Our fine-tuning code can be found here.
🚀 Quick Start
You can use the CAMeLBERT-DA SA model directly as part of the CAMeL Tools sentiment analysis component (recommended) or as part of a transformers pipeline.
💻 Usage Examples
Basic usage
Call the model through the CAMeL Tools sentiment analysis component:
>>> from camel_tools.sentiment import SentimentAnalyzer
>>> sa = SentimentAnalyzer("CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment")
>>> sentences = ['أنا بخير', 'أنا لست بخير']
>>> sa.predict(sentences)
['positive', 'negative']
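If you want to keep each predicted label paired with its input sentence, here is a minimal sketch that reuses only the predict call shown above (the loop itself is illustrative and not part of the CAMeL Tools API):
>>> # Pair every input sentence with its predicted label
>>> for sentence, label in zip(sentences, sa.predict(sentences)):
...     print(sentence, '->', label)
أنا بخير -> positive
أنا لست بخير -> negative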
Advanced usage
Call the sentiment analysis model directly through a transformers pipeline:
>>> from transformers import pipeline
>>> sa = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment')
>>> sentences = ['أنا بخير', 'أنا لست بخير']
>>> sa(sentences)
[{'label': 'positive', 'score': 0.9616648554801941},
{'label': 'negative', 'score': 0.9779177904129028}]
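For reference, a minimal sketch of what the pipeline does under the hood, using AutoTokenizer and AutoModelForSequenceClassification directly; the label mapping is assumed to be read from the checkpoint's config:
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification
>>> model_id = 'CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment'
>>> tokenizer = AutoTokenizer.from_pretrained(model_id)
>>> model = AutoModelForSequenceClassification.from_pretrained(model_id)
>>> # Tokenize both sentences in one padded batch
>>> inputs = tokenizer(['أنا بخير', 'أنا لست بخير'], padding=True, return_tensors='pt')
>>> with torch.no_grad():
...     probs = model(**inputs).logits.softmax(dim=-1)
>>> # Map each highest-probability class index back to its label string
>>> [model.config.id2label[i] for i in probs.argmax(dim=-1).tolist()]
['positive', 'negative']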
⚠️ Important Note
To download our models, you need transformers>=3.5.0. Otherwise, you can download the models manually.
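For a manual download, one option is the huggingface_hub package (an assumption on our part; cloning the model repository with git works as well). The returned local path can then be passed in place of the model name used in the examples above:
>>> from huggingface_hub import snapshot_download
>>> # Downloads all model files and returns the local snapshot path
>>> local_path = snapshot_download('CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment')
>>> # Assumption: the component accepts a local path, as transformers-based loaders typically do
>>> sa = SentimentAnalyzer(local_path)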
📚 Documentation
Intended uses
You can use the CAMeLBERT-DA SA model directly as part of our CAMeL Tools sentiment analysis component (recommended) or as part of a transformers pipeline.
📄 License
This project is licensed under the Apache-2.0 License.
📖 Citation
If you use this model in your research, please cite it with the following BibTeX:
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}