🚀 Sundanese RoBERTa Base
Sundanese RoBERTa Base is a masked language model that addresses the lack of natural language understanding resources for the Sundanese language. By leveraging large-scale Sundanese datasets, it provides a monolingual alternative to multilingual models for Sundanese language processing tasks.
🚀 Quick Start
Sundanese RoBERTa Base is a masked language model based on the RoBERTa architecture. It was trained from scratch on four datasets: OSCAR's `unshuffled_deduplicated_su` subset, the Sundanese mC4 subset, the Sundanese CC100 subset, and Sundanese Wikipedia.
10% of the combined dataset was held out for evaluation. The model achieved an evaluation loss of 1.952 and an evaluation accuracy of 63.98%.
This model was trained using Hugging Face's Flax framework. All training scripts can be found in the Files and versions tab, along with the training metrics logged via TensorBoard.
✨ Features
- Based on RoBERTa: Leveraging the powerful architecture of RoBERTa for effective language understanding.
- Trained on Multiple Datasets: Utilizes diverse Sundanese datasets including OSCAR, mC4, CC100, and Wikipedia.
- High-Quality Training: Trained from scratch with solid evaluation results (loss: 1.952, accuracy: 63.98%).
📦 Installation
No separate installation step is required; the model can be loaded directly with the Hugging Face `transformers` library (e.g. `pip install transformers`, plus PyTorch for the feature-extraction example below).
💻 Usage Examples
Basic Usage - As a Masked Language Model
```python
from transformers import pipeline

pretrained_name = "w11wo/sundanese-roberta-base"

fill_mask = pipeline(
    "fill-mask",
    model=pretrained_name,
    tokenizer=pretrained_name
)

fill_mask("Budi nuju <mask> di sakola.")
```
Advanced Usage - Feature Extraction in PyTorch
```python
from transformers import RobertaModel, RobertaTokenizerFast

pretrained_name = "w11wo/sundanese-roberta-base"
model = RobertaModel.from_pretrained(pretrained_name)
tokenizer = RobertaTokenizerFast.from_pretrained(pretrained_name)

prompt = "Budi nuju diajar di sakola."
encoded_input = tokenizer(prompt, return_tensors='pt')
output = model(**encoded_input)
```
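The call above returns token-level hidden states in `output.last_hidden_state`. Continuing from that snippet, here is a short sketch of mean-pooling those states into a single sentence vector; this pooling choice is an illustrative assumption, not something prescribed by the original card:

```python
import torch

# last_hidden_state has shape (batch_size, sequence_length, hidden_size);
# hidden_size is 768 for this base-sized RoBERTa model.
with torch.no_grad():
    output = model(**encoded_input)

# Mean-pool over the token dimension to get one fixed-size sentence embedding.
sentence_embedding = output.last_hidden_state.mean(dim=1)
print(sentence_embedding.shape)  # torch.Size([1, 768])
```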
📚 Documentation
Model
| Property | Details |
|---|---|
| Model Type | `sundanese-roberta-base` |
| #params | 124M |
| Arch. | RoBERTa |
| Training Data | OSCAR, mC4, CC100, Wikipedia (758 MB) |
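As a quick sanity check (not part of the original card), the parameter count in the table can be verified programmatically:

```python
from transformers import RobertaModel

model = RobertaModel.from_pretrained("w11wo/sundanese-roberta-base")
# Sum the sizes of all parameter tensors; this should be on the order of 124M
# for a base-sized RoBERTa encoder.
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.0f}M parameters")
```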
Evaluation Results
The model was trained for 50 epochs; the table below shows the final results at the end of training.
| train loss | valid loss | valid accuracy | total time |
|---|---|---|---|
| 1.965 | 1.952 | 0.6398 | 6:24:51 |
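For reference, the validation loss can be converted into a pseudo-perplexity via `exp(loss)`; this is a derived figure, not one reported in the original card:

```python
import math

# Treating the validation loss as a cross-entropy in nats,
# the corresponding pseudo-perplexity is exp(1.952) ≈ 7.0.
valid_loss = 1.952
print(math.exp(valid_loss))
```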
🔧 Technical Details
The model is based on the RoBERTa architecture and was trained from scratch with Hugging Face's Flax framework on a combination of four Sundanese datasets, with 10% of the data reserved for evaluation. Training metrics were logged via TensorBoard, and all training scripts can be found in the Files and versions tab.
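Purely as an illustration of the setup described above (an assumption-laden sketch, not the original Flax pipeline, which lives in the Files and versions tab), corpus preparation and the 90/10 split could look like this with the `datasets` and `transformers` libraries, using the OSCAR subset named in the card and a conventional 15% masking rate:

```python
from datasets import load_dataset
from transformers import RobertaTokenizerFast, DataCollatorForLanguageModeling

tokenizer = RobertaTokenizerFast.from_pretrained("w11wo/sundanese-roberta-base")

# One of the four corpora described in the card; the others would be concatenated similarly.
oscar_su = load_dataset("oscar", "unshuffled_deduplicated_su", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = oscar_su.map(tokenize, batched=True, remove_columns=oscar_su.column_names)

# Hold out 10% of the data for evaluation, as described above.
splits = tokenized.train_test_split(test_size=0.1, seed=42)

# Dynamic masking for the masked-language-modelling objective (15% is the usual RoBERTa rate).
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
```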
📄 License
This project is licensed under the MIT license.
📖 Disclaimer
⚠️ Important Note
Consider the biases originating from all four datasets, which may be carried over into the outputs of this model.
👨‍💻 Author
Sundanese RoBERTa Base was trained and evaluated by Wilson Wongso.
📚 Citation Information
```bibtex
@article{rs-907893,
    author = {Wongso, Wilson
              and Lucky, Henry
              and Suhartono, Derwin},
    journal = {Journal of Big Data},
    year = {2022},
    month = {Feb},
    day = {26},
    abstract = {The Sundanese language has over 32 million speakers worldwide, but the language has reaped little to no benefits from the recent advances in natural language understanding. Like other low-resource languages, the only alternative is to fine-tune existing multilingual models. In this paper, we pre-trained three monolingual Transformer-based language models on Sundanese data. When evaluated on a downstream text classification task, we found that most of our monolingual models outperformed larger multilingual models despite the smaller overall pre-training data. In the subsequent analyses, our models benefited strongly from the Sundanese pre-training corpus size and do not exhibit socially biased behavior. We released our models for other researchers and practitioners to use.},
    issn = {2693-5015},
    doi = {10.21203/rs.3.rs-907893/v1},
    url = {https://doi.org/10.21203/rs.3.rs-907893/v1}
}
```