🚀 CAMeLBERT-CA Sentiment Analysis Model
The CAMeLBERT-CA Sentiment Analysis Model is a sentiment analysis (SA) model built by fine-tuning the CAMeLBERT Classical Arabic (CA) model. For fine-tuning, we used the ASTD, ArSAS, and SemEval datasets. Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models". Our fine-tuning code can be found here.
🚀 Quick Start
You can use the CAMeLBERT-CA Sentiment Analysis Model directly as part of our CAMeL Tools sentiment analysis component (recommended), or as part of a transformers pipeline.
✨ Key Features
- Built by fine-tuning the CAMeLBERT Classical Arabic model for Arabic sentiment analysis tasks.
- Fine-tuned on multiple publicly available datasets (ASTD, ArSAS, and SemEval), which supports the model's robustness and generalization.
📦 Installation
To download our model, you need transformers>=3.5.0. Otherwise, you can download the model manually.
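A minimal install sketch follows. The pip command simply pins the version requirement stated above; the huggingface_hub call is one assumed way to do the manual download, since the original card does not specify a method.

$ pip install 'transformers>=3.5.0'

>>> # Assumption: manual download via huggingface_hub (not specified in the original card)
>>> from huggingface_hub import snapshot_download
>>> local_dir = snapshot_download("CAMeL-Lab/bert-base-arabic-camelbert-ca-sentiment")
>>> local_dir  # path to the locally cached model files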
💻 Usage Examples
Basic Usage
Using the CAMeL Tools sentiment analysis component:
>>> from camel_tools.sentiment import SentimentAnalyzer
>>> sa = SentimentAnalyzer("CAMeL-Lab/bert-base-arabic-camelbert-ca-sentiment")
>>> sentences = ['أنا بخير', 'أنا لست بخير']
>>> sa.predict(sentences)
['positive', 'negative']
Advanced Usage
Using the transformers pipeline directly:
>>> from transformers import pipeline
>>> sa = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-ca-sentiment')
>>> sentences = ['أنا بخير', 'أنا لست بخير']
>>> sa(sentences)
[{'label': 'positive', 'score': 0.9616648554801941},
{'label': 'negative', 'score': 0.9779177904129028}]
📚 Documentation
For the full fine-tuning procedure and hyperparameter settings, see our paper: "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models".
📄 License
This project is licensed under the Apache-2.0 license.
📚 Citation
If you use this model, please cite the following paper:
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}