🚀 Fine-Tashkeel: Finetuning Byte-Level Models for Accurate Arabic Text Diacritization
This project focuses on finetuning byte-level models to achieve accurate diacritization of Arabic text, leveraging pre-trained models to reach strong performance with minimal training and no feature engineering.
🚀 Quick Start
Basic Usage
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

if __name__ == "__main__":
    text = "كيف الحال"
    model_name = "basharalrfooh/Fine-Tashkeel"

    # Load the finetuned ByT5 diacritization model and its tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

    # Byte-level tokenization of the undiacritized input
    input_ids = tokenizer(text, return_tensors="pt").input_ids

    # Generate the diacritized text
    outputs = model.generate(input_ids, max_new_tokens=128)
    decoded_output = tokenizer.decode(
        outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=False
    )
    print("Generated output:", decoded_output)
```
✨ Features
- Leverages a pre-trained multilingual model (ByT5) for Arabic text diacritization.
- Achieves state-of-the-art results on the diacritization task with minimal training and no feature engineering, reducing WER by 40%.
- Releases the finetuned models for the research community.
📦 Installation
No specific installation steps are provided in the original document. The usage examples require the Hugging Face `transformers` library and PyTorch (e.g. `pip install transformers torch`).
📚 Documentation
Introduction
Most previous work on learning diacritization of the Arabic language relied on training models from scratch. In this paper, we investigate how to leverage pre-trained language models to learn diacritization. We finetune token-free pre-trained multilingual models (ByT5) to learn to predict and insert missing diacritics in Arabic text, a complex task that requires understanding the sentence semantics and the morphological structure of the tokens. We show that we can achieve state-of-the-art results on the diacritization task with a minimal amount of training and no feature engineering, reducing WER by 40%. We release our finetuned models for the greater benefit of the researchers in the community.
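To make the task concrete: the model receives Arabic text whose diacritics have been removed and must restore them. A minimal sketch of producing such undiacritized input (assuming the common Arabic combining marks U+064B through U+0652 as the diacritic set; the authors' exact preprocessing is not documented here):

```python
# Sketch: stripping Arabic diacritics (harakat) to produce the
# undiacritized input that the model learns to restore.
# Assumption: U+064B..U+0652 covers the short vowels, tanwin,
# shadda, and sukun used in fully vocalized text.
ARABIC_DIACRITICS = {chr(c) for c in range(0x064B, 0x0653)}

def strip_diacritics(text: str) -> str:
    """Remove Arabic diacritical marks, keeping only base letters."""
    return "".join(ch for ch in text if ch not in ARABIC_DIACRITICS)

print(strip_diacritics("كَيْفَ"))  # base letters only: كيف
```

Pairs of (stripped line, original line) from a vocalized corpus then serve directly as training inputs and targets.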
Model Description
The ByT5 model, distinguished by its token-free architecture, directly processes raw text, allowing it to handle diverse languages and linguistic nuances. Pre-trained on the comprehensive mC4 text corpus, ByT5 excels at understanding and generating text, making it versatile across NLP tasks. We have further enhanced its capabilities by fine-tuning it on the Tashkeela dataset for 13,000 steps, significantly refining its performance in restoring the diacritical marks of Arabic text.
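"Token-free" here means the model operates on raw UTF-8 bytes rather than a learned subword vocabulary, so Arabic needs no special tokenizer. A rough illustration of the scheme (assuming the offset-by-3 convention described in the ByT5 paper, which reserves ids 0-2 for pad, eos, and unk):

```python
# Sketch of ByT5-style byte-level encoding. Each UTF-8 byte maps to
# (byte value + 3); id 1 is the end-of-sequence token. This is an
# illustrative re-implementation, not the library tokenizer itself.
def byt5_style_encode(text: str) -> list[int]:
    return [b + 3 for b in text.encode("utf-8")] + [1]  # append </s>

# Arabic letters occupy two UTF-8 bytes each, so a 3-letter word
# becomes 6 byte ids plus the eos id.
print(byt5_style_encode("كيف"))
```

Because every character decomposes into bytes, diacritic marks are ordinary symbols in the sequence, which is what lets a pre-trained ByT5 learn to insert them.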
Benchmarks
⚠️ Important Note
This model has been trained specifically for use with Classical Arabic.
Our model attained a Diacritics Error Rate (DER) of 0.95 and a Word Error Rate (WER) of 2.49.
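For reference, WER counts the fraction of words whose diacritization differs from the reference. A simplified sketch (assuming, as is typical for diacritization, that the model output aligns word-for-word with the reference, so the metric reduces to mismatch counting rather than full edit distance):

```python
# Simplified word error rate for aligned diacritization output.
# Assumption: hypothesis and reference contain the same words in the
# same order, differing only in diacritics.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref_words, hyp_words = reference.split(), hypothesis.split()
    errors = sum(r != h for r, h in zip(ref_words, hyp_words))
    return errors / max(len(ref_words), 1)
```

DER is computed analogously at the level of individual diacritic positions rather than whole words.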
Information Table

| Property | Details |
|----------|---------|
| Model Type | Fine-tuned ByT5 for Arabic text diacritization |
| Training Data | Tashkeela dataset |
| Metrics | Diacritics Error Rate (DER): 0.95, Word Error Rate (WER): 2.49 |
| Language | Arabic (specifically Classical Arabic) |
| Pipeline Tag | text2text-generation |
🔧 Technical Details
The ByT5 model, with its token-free architecture, is pre-trained on the mC4 text corpus. By fine-tuning it on the Tashkeela dataset for 13,000 steps, we enhance its ability to capture the semantics and morphological structure of Arabic text, thus improving its performance in restoring diacritical marks.
📄 License
This project is licensed under the MIT license.
📚 Citation
```bibtex
@misc{alrfooh2023finetashkeel,
      title={Fine-Tashkeel: Finetuning Byte-Level Models for Accurate Arabic Text Diacritization},
      author={Bashar Al-Rfooh and Gheith Abandah and Rami Al-Rfou},
      year={2023},
      eprint={2303.14588},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
📞 Contact
bashar@alrfou.com