CAMeLBERT-Mix Poetry Classification Model
CAMeLBERT-Mix Poetry Classification Model is a poetry classification model built by fine-tuning the CAMeLBERT-Mix model to classify Arabic poetry. It was fine-tuned on the APCD dataset.
Quick Start
You can use the CAMeLBERT-Mix Poetry Classification model directly through the transformers text-classification pipeline, as shown in the usage example below. The model will also be available in CAMeL Tools soon.
Installation
To download our models, you need transformers>=3.5.0. Otherwise, you could download the models manually.
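If you do want to download the model files manually, one option (not from the original model card; it assumes the separate huggingface_hub package is installed) is a sketch like this:
>>> from huggingface_hub import snapshot_download
>>> # Downloads every file in the model repository to the local cache and returns the local path.
>>> snapshot_download(repo_id='CAMeL-Lab/bert-base-arabic-camelbert-mix-poetry')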
Usage Examples
Basic Usage
To use the model with a transformers pipeline:
>>> from transformers import pipeline
>>> poetry = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-poetry')
>>> # A list of verses, where each verse consists of two halves.
>>> verses = [
        ['الخيل والليل والبيداء تعرفني', 'والسيف والرمح والقرطاس والقلم'],
        ['قم للمعلم وفه التبجيلا', 'كاد المعلم ان يكون رسولا']
    ]
>>> # Join the two halves of each verse with the [SEP] token.
>>> join_verse = lambda half: ' [SEP] '.join(half)
>>> verses = [join_verse(verse) for verse in verses]
>>> poetry(verses)
[{'label': 'البسيط', 'score': 0.9937475919723511},
 {'label': 'الكامل', 'score': 0.971284031867981}]
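If you prefer not to use the pipeline, the following is a minimal sketch of the same classification with the lower-level transformers API. It is not taken from the model card and assumes the model's config exposes the usual id2label mapping:
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification
>>> model_name = 'CAMeL-Lab/bert-base-arabic-camelbert-mix-poetry'
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
>>> model = AutoModelForSequenceClassification.from_pretrained(model_name)
>>> # The two halves of a verse are joined with the [SEP] token, as in the pipeline example.
>>> verse = 'الخيل والليل والبيداء تعرفني [SEP] والسيف والرمح والقرطاس والقلم'
>>> inputs = tokenizer(verse, return_tensors='pt')
>>> logits = model(**inputs).logits
>>> model.config.id2label[int(logits.argmax(dim=-1))]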
Documentation
Model description
CAMeLBERT-Mix Poetry Classification Model is a poetry classification model that was built by fine-tuning the CAMeLBERT-Mix model. For the fine-tuning, we used the APCD dataset. Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here.
Intended uses
You can use the CAMeLBERT-Mix Poetry Classification model as part of the transformers pipeline. This model will also be available in CAMeL Tools soon.
License
This model is licensed under the Apache-2.0 license.
Citation
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}