🚀 CAMeLBERT-Mix Poetry Classification Model
The CAMeLBERT-Mix Poetry Classification Model is a poetry classification model built by fine-tuning the CAMeLBERT Mix model. It classifies Arabic poetry verses by meter, supporting related research and applications.
🚀 Quick Start
You can use the CAMeLBERT-Mix Poetry Classification Model directly as part of the transformers text-classification pipeline. This model will also be available in CAMeL Tools soon.
✨ Main Features
Fine-tuned from the CAMeLBERT Mix model on the APCD dataset, the model assigns a meter label to Arabic poetry verses.
📦 Installation
To download our models, you need transformers>=3.5.0. Otherwise, you can download the models manually.
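For the manual route, a minimal sketch using the transformers Auto classes is shown below; the model identifier is the same one used in the usage example, and the variable names are illustrative:

>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification
>>> # Download the tokenizer and classification weights from the Hugging Face Hub.
>>> tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-mix-poetry')
>>> model = AutoModelForSequenceClassification.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-mix-poetry')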
💻 Usage Examples
Basic usage
>>> from transformers import pipeline
>>> poetry = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-poetry')
>>> # A list of verses, each given as its two halves.
>>> verses = [
['الخيل والليل والبيداء تعرفني' ,'والسيف والرمح والقرطاس والقلم'],
['قم للمعلم وفه التبجيلا' ,'كاد المعلم ان يكون رسولا']
]
>>> # Join the two halves of each verse with the [SEP] token.
>>> join_verse = lambda half: ' [SEP] '.join(half)
>>>
>>> # Apply this to every verse in the list.
>>> verses = [join_verse(verse) for verse in verses]
>>> poetry(verses)
[{'label': 'البسيط', 'score': 0.9937475919723511},
{'label': 'الكامل', 'score': 0.971284031867981}]
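If you need the scores for every meter label rather than only the top prediction, one possible sketch is to build the pipeline with return_all_scores=True (an option of the transformers text-classification pipeline in the versions this card targets; newer releases replace it with top_k, so check your installed version):

>>> poetry_all = pipeline('text-classification',
...                       model='CAMeL-Lab/bert-base-arabic-camelbert-mix-poetry',
...                       return_all_scores=True)
>>> # Returns, for each verse, a list of {label, score} dicts covering every meter class.
>>> poetry_all(verses)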
📚 Detailed Documentation
Model Description
The CAMeLBERT-Mix Poetry Classification Model is a poetry classification model that was built by fine-tuning the CAMeLBERT Mix model. For fine-tuning, we used the APCD dataset. Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models". Our fine-tuning code can be found here.
Intended Uses
You can use the CAMeLBERT-Mix Poetry Classification Model as part of the transformers pipeline. This model will also be available in CAMeL Tools soon.
📄 License
This project is released under the Apache-2.0 license.
🔖 Citation
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}