🚀 CAMeLBERT MSA Named Entity Recognition Model
The CAMeLBERT MSA NER model is a named entity recognition model for Arabic, built by fine-tuning the CAMeLBERT Modern Standard Arabic (MSA) model. It was fine-tuned on the ANERcorp dataset. The fine-tuning procedure and hyperparameters are described in the paper *"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models"*, and the fine-tuning code is available here.
🚀 Quick Start
You can use the CAMeLBERT MSA NER model directly as part of the CAMeL Tools NER component (recommended), or integrate it into a transformers pipeline.
✨ Key Features
- Built on a fine-tuned pre-trained model that performs well on Arabic named entity recognition.
- Supports multiple usage modes and can be integrated into different tools and frameworks.
📦 Installation
Downloading the model requires transformers>=3.5.0. If your installed version does not meet this requirement, you can also download the model manually.
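Before loading the model, you can verify that the installed transformers version satisfies the requirement. A minimal sketch (the `meets_min_version` helper is illustrative, not part of any library; in practice `packaging.version.parse` is more robust):

```python
def meets_min_version(installed: str, required: str = "3.5.0") -> bool:
    """Return True if a dotted version string meets the minimum.

    Simple numeric comparison of dotted version strings; assumes
    plain numeric components (no pre-release suffixes).
    """
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(installed) >= as_tuple(required)
```

For example, `meets_min_version("4.30.2")` is true while `meets_min_version("3.4.9")` is not.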
💻 Usage Examples
Basic Usage
Invoke the model through the CAMeL Tools NER component:
>>> from camel_tools.ner import NERecognizer
>>> from camel_tools.tokenizers.word import simple_word_tokenize
>>> ner = NERecognizer('CAMeL-Lab/bert-base-arabic-camelbert-msa-ner')
>>> sentence = simple_word_tokenize('إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع')
>>> ner.predict_sentence(sentence)
['O', 'B-LOC', 'O', 'O', 'O', 'O', 'B-LOC', 'I-LOC', 'I-LOC', 'O']
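The predicted labels are aligned one-to-one with the input tokens, so the two lists can be zipped to see which words were tagged as entities. A self-contained sketch using the tokens and labels from the example above:

```python
# Tokens produced by simple_word_tokenize and the labels predicted above
tokens = ['إمارة', 'أبوظبي', 'هي', 'إحدى', 'إمارات', 'دولة',
          'الإمارات', 'العربية', 'المتحدة', 'السبع']
labels = ['O', 'B-LOC', 'O', 'O', 'O', 'O', 'B-LOC', 'I-LOC', 'I-LOC', 'O']

# Keep only the tokens that belong to a named entity
entities = [(tok, lab) for tok, lab in zip(tokens, labels) if lab != 'O']
```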
Advanced Usage
Call the NER model directly through a transformers pipeline:
>>> from transformers import pipeline
>>> ner = pipeline('ner', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-ner')
>>> ner("إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع")
[{'word': 'أبوظبي',
'score': 0.9895730018615723,
'entity': 'B-LOC',
'index': 2,
'start': 6,
'end': 12},
{'word': 'الإمارات',
'score': 0.8156259655952454,
'entity': 'B-LOC',
'index': 8,
'start': 33,
'end': 41},
{'word': 'العربية',
'score': 0.890906810760498,
'entity': 'I-LOC',
'index': 9,
'start': 42,
'end': 49},
{'word': 'المتحدة',
'score': 0.8169114589691162,
'entity': 'I-LOC',
'index': 10,
'start': 50,
'end': 57}]
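The pipeline returns token-level B-/I- tags; to recover whole entity spans, consecutive tokens sharing a label can be merged. A hedged sketch (`group_entities` is an illustrative helper, not part of transformers; newer transformers versions can do this via the pipeline's `aggregation_strategy` argument):

```python
def group_entities(predictions):
    """Merge token-level BIO predictions (in the pipeline output
    format shown above) into contiguous entity spans."""
    spans = []
    for p in predictions:
        prefix, label = p["entity"].split("-", 1)
        # Start a new span on a B- tag, a label change, or a gap in indices
        starts_new = (
            prefix == "B"
            or not spans
            or spans[-1]["label"] != label
            or p["index"] != spans[-1]["last_index"] + 1
        )
        if starts_new:
            spans.append({"label": label, "start": p["start"],
                          "end": p["end"], "last_index": p["index"],
                          "words": [p["word"]]})
        else:
            spans[-1].update(end=p["end"], last_index=p["index"])
            spans[-1]["words"].append(p["word"])
    return [{"label": s["label"], "start": s["start"], "end": s["end"],
             "text": " ".join(s["words"])} for s in spans]
```

Applied to the four predictions above, this yields two LOC spans: "أبوظبي" and "الإمارات العربية المتحدة".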
📚 Documentation
For detailed information about the model, see the paper and fine-tuning code referenced above.
📄 License
This project is released under the Apache-2.0 license.
📖 Citation
If you use this model, please cite the following paper:
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
📋 Model Information

| Attribute | Details |
| --- | --- |
| Model type | Named entity recognition model |
| Training data | ANERcorp dataset |