🚀 CAMeLBERT-Mix DID Madar Corpus26 Model
CAMeLBERT-Mix DID Madar Corpus26 Model is a dialect identification (DID) model. Given an Arabic sentence, it predicts one of 26 dialect labels, making it useful for dialect-aware Arabic NLP applications.
🚀 Quick Start
You can use the CAMeLBERT-Mix DID Madar Corpus26 model as part of the transformers pipeline. This model will also be available in CAMeL Tools soon.
📦 Installation
To download the models through the `transformers` library, you need `transformers>=3.5.0`. Otherwise, you can download the models manually.
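For example, with pip:

```bash
pip install 'transformers>=3.5.0'
```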
💻 Usage Examples
Basic Usage
```python
>>> from transformers import pipeline
>>> did = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar26')
>>> sentences = ['عامل ايه ؟', 'شلونك ؟ شخبارك ؟']
>>> did(sentences)
[{'label': 'CAI', 'score': 0.8751305937767029},
 {'label': 'DOH', 'score': 0.9867215156555176}]
```
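If you prefer to work below the pipeline abstraction, the same predictions can be reproduced with the generic `transformers` classes. This is a minimal sketch using standard APIs (not taken from the model card itself):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = 'CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar26'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

sentences = ['عامل ايه ؟', 'شلونك ؟ شخبارك ؟']
inputs = tokenizer(sentences, return_tensors='pt', padding=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Map each sentence's highest-scoring class index to its dialect label
print([model.config.id2label[i] for i in logits.argmax(dim=-1).tolist()])
```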
📚 Documentation
Model description
CAMeLBERT-Mix DID Madar Corpus26 Model is a dialect identification (DID) model that was built by fine-tuning the CAMeLBERT-Mix model.
For the fine-tuning, we used the MADAR Corpus 26 dataset, which includes 26 labels.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here.
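As a quick sanity check, the 26 labels can be read directly from the checkpoint's configuration (standard `transformers` usage; this assumes the label names are stored in the config, as is typical for fine-tuned classification checkpoints):

```python
from transformers import AutoConfig

# The fine-tuned classification head has one output per MADAR Corpus 26 label
config = AutoConfig.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar26')
print(len(config.id2label))             # 26
print(sorted(config.id2label.values())) # e.g. 'CAI' (Cairo), 'DOH' (Doha), ...
```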
📄 License
This model is released under the Apache-2.0 license.
📄 Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
    title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
    author = "Inoue, Go  and
      Alhafni, Bashar  and
      Baimukan, Nurpeiis  and
      Bouamor, Houda  and
      Habash, Nizar",
    booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
    month = apr,
    year = "2021",
    address = "Kyiv, Ukraine (Online)",
    publisher = "Association for Computational Linguistics",
    abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
```