Model Card for indobertweet-base-Indonesian-sentiment-analysis
This model is a fine-tuned version of IndoBERTweet-base-uncased for Indonesian sentiment analysis. It classifies sentiment into three categories: negative, positive, and neutral. Trained on a diverse dataset from Twitter and other social media, covering politics, disasters, and education, it uses Optuna for hyperparameter tuning and is evaluated with accuracy, F1-score, precision, and recall metrics.
Features
- Sentiment Classification: Classifies text into negative, positive, or neutral sentiment.
- Diverse Training Data: Utilizes data from various social media platforms and topics.
- Hyperparameter Tuning: Optimized using Optuna for better performance.
Installation
The original card does not include explicit installation steps. In practice, the model is used through the Hugging Face transformers library, so installing the transformers and torch packages (for example with pip) is sufficient; see the usage sketch below.
Usage Examples
Example 1
- Text: You guys probably don't know this Indonesian band. But don't take it lightly. Because they boldly sing about how activists are stabbed, poisoned, electrocuted, and killed in the air. People who sacrificed their lives so that you can enjoy today while tweeting without worry.
- Output:
- Negative: 0.2964
- Neutral: 0.0067
- Positive: 0.6969
Example 2
- Text: As long as there are groups wanting to be messiahs, the government has a justification to make many rules, which creates loopholes for corruption and power abuse. Justice is deregulation.
- Output:
- Negative: 0.971
- Neutral: 0.0165
- Positive: 0.0126
Example 3
- Text: When your supporters are like "okay, go! okay, go!" but you just laugh, that's so disrespectful. Don't forget to have lunch, it's free, dude.
- Output:
- Negative: 0.6457
- Neutral: 0.048
- Positive: 0.3063
Example 4
- Text: Please share work-from-home/freelance job info for students. I really want some extra pocket money in the dorm.
- Output:
- Negative: 0.0544
- Neutral: 0.6973
- Positive: 0.2482
Example 5
- Text: It's so hard to find a job these days. Even the president's kids need their dad to find jobs for them.
- Output:
- Negative: 0.9852
- Neutral: 0.0116
- Positive: 0.0032
Example 6
- Text: The Indonesian Broadcasting Commission (KPI) asks television broadcasts to present a positive and educational image of the police accurately. This was stated by the head of the Central KPI, Ubaidillah, in a panel discussion.
- Output:
- Neutral: 0.9932
- Positive: 0.0063
- Negative: 0.0005
Example 7
- Text: Don't make joking tweets. Sometimes I read a normal tweet and think 'oh, interesting' then I like/retweet it. Then I go to sleep. The next day, that tweet gets bashed. I start to wonder, am I the one who thinks everything is interesting and everyone could be right... OR... has everyone become so sensitive?
- Output:
- Negative: 0.5531
- Neutral: 0.4426
- Positive: 0.0043
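The per-class scores above can be reproduced with the Hugging Face transformers text-classification pipeline. The snippet below is a minimal sketch rather than code from the original card; the repository id is taken from the citation at the end of this card, and the input sentence is only illustrative.

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
# (repository id taken from the citation at the end of this card).
classifier = pipeline(
    "text-classification",
    model="Aardiiiiy/indobertweet-base-Indonesian-sentiment-analysis",
    top_k=None,  # return scores for all three labels, not just the top one
)

text = "Susah banget cari kerja sekarang."  # illustrative Indonesian input
for prediction in classifier(text)[0]:
    print(f"{prediction['label']}: {prediction['score']:.4f}")
```

With top_k=None the pipeline returns scores for all three labels, matching the negative/neutral/positive breakdown shown in the examples.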
Technical Details
Model Description
This model is a fine-tuned version of IndoBERTweet-base-uncased for Indonesian sentiment analysis. The model is designed to classify sentiment into three categories: negative, positive, and neutral. It has been trained on a diverse dataset comprising reactions from Twitter and other social media platforms, covering various topics, including politics, disasters, and education. The model is optimized using Optuna for hyperparameter tuning and evaluated using accuracy, F1-score, precision, and recall metrics.
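For finer control than the pipeline shown earlier, the same probabilities can be obtained by loading the tokenizer and classification model directly and applying a softmax over the three logits. This is a sketch assuming the standard transformers API; the repository id comes from the citation below and the input text is illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Repository id taken from the citation at the end of this card.
model_id = "Aardiiiiy/indobertweet-base-Indonesian-sentiment-analysis"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Pelayanan di kota ini sangat lambat."  # illustrative Indonesian input
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Softmax over the three logits yields per-class probabilities like the
# negative/neutral/positive scores shown in the usage examples above.
probs = torch.softmax(logits, dim=-1).squeeze()
for idx, p in enumerate(probs):
    print(f"{model.config.id2label[idx]}: {p.item():.4f}")
```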
Bias and Limitations
Keep in mind that this model was trained on a specific dataset, which may introduce bias into the sentiment classification process. The model may inherit socio-cultural biases from its training data and may be less accurate on recent events not covered by that data. The three-category scheme may also fail to capture the full complexity of emotions, particularly in specific contexts. These biases should be taken into account when using this model.
Evaluation Results
The training process used hyperparameter optimization with Optuna. The model was trained for a maximum of 10 epochs with a batch size of 16, using a tuned learning rate and weight decay. Evaluation was performed every 100 steps, and the best model was saved based on accuracy. Early stopping with a patience of 3 was applied to prevent overfitting. A sketch of this training setup is given after the results table below.
| Step | Training Loss | Validation Loss | Accuracy | F1 | Precision | Recall |
|------|---------------|-----------------|----------|----|-----------|--------|
| 100  | 1.052800 | 0.995017 | 0.482368 | 0.348356 | 0.580544 | 0.482368 |
| 200  | 0.893700 | 0.807756 | 0.730479 | 0.703134 | 0.756189 | 0.730479 |
| 300  | 0.583400 | 0.476157 | 0.850126 | 0.847161 | 0.849467 | 0.850126 |
| 400  | 0.413600 | 0.385942 | 0.867758 | 0.867614 | 0.870417 | 0.867758 |
| 500  | 0.345700 | 0.362191 | 0.885390 | 0.883918 | 0.886880 | 0.885390 |
| 600  | 0.245400 | 0.330090 | 0.897985 | 0.897466 | 0.897541 | 0.897985 |
| 700  | 0.485000 | 0.308807 | 0.899244 | 0.898736 | 0.898761 | 0.899244 |
| 800  | 0.363700 | 0.328786 | 0.896725 | 0.895167 | 0.898695 | 0.896725 |
| 900  | 0.369800 | 0.329429 | 0.892947 | 0.893138 | 0.898281 | 0.892947 |
| 1000 | 0.273300 | 0.305412 | 0.910579 | 0.910355 | 0.910519 | 0.910579 |
| 1100 | 0.272800 | 0.388976 | 0.891688 | 0.893113 | 0.896606 | 0.891688 |
| 1200 | 0.259900 | 0.305771 | 0.913098 | 0.913123 | 0.913669 | 0.913098 |
| 1300 | 0.293500 | 0.317654 | 0.908060 | 0.908654 | 0.909939 | 0.908060 |
| 1400 | 0.255200 | 0.331161 | 0.915617 | 0.915708 | 0.916149 | 0.915617 |
| 1500 | 0.139800 | 0.352545 | 0.909320 | 0.909768 | 0.911014 | 0.909320 |
| 1600 | 0.194400 | 0.372482 | 0.904282 | 0.904296 | 0.906285 | 0.904282 |
| 1700 | 0.134200 | 0.340576 | 0.906801 | 0.907110 | 0.907780 | 0.906801 |
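The training setup described above (Optuna tuning of the learning rate and weight decay, at most 10 epochs, batch size 16, evaluation every 100 steps, best model by accuracy, early stopping with patience 3) could be reproduced roughly as follows. This is a hypothetical reconstruction: the base checkpoint name, the search space, the number of trials, and the dataset objects are assumptions, since the original card does not include training code.

```python
import numpy as np
import evaluate
from transformers import (
    AutoModelForSequenceClassification,
    EarlyStoppingCallback,
    Trainer,
    TrainingArguments,
)

base_model = "indolem/indobertweet-base-uncased"  # assumed base checkpoint
accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=preds, references=labels)

def model_init(trial=None):
    # A fresh model per Optuna trial, with a three-class head.
    return AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=3)

args = TrainingArguments(
    output_dir="indobertweet-sentiment",
    num_train_epochs=10,               # maximum of 10 epochs
    per_device_train_batch_size=16,    # batch size of 16
    eval_strategy="steps",             # "evaluation_strategy" in older transformers
    eval_steps=100,                    # evaluate every 100 steps
    save_steps=100,
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",  # keep the best checkpoint by accuracy
)

# train_ds and val_ds are assumed to be pre-tokenized datasets.Dataset objects;
# the training data itself is not distributed with this card.
trainer = Trainer(
    model_init=model_init,
    args=args,
    train_dataset=train_ds,
    eval_dataset=val_ds,
    compute_metrics=compute_metrics,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)

# Optuna searches over learning rate and weight decay; the exact ranges and
# number of trials below are assumptions, not values from the original card.
def hp_space(trial):
    return {
        "learning_rate": trial.suggest_float("learning_rate", 1e-5, 5e-5, log=True),
        "weight_decay": trial.suggest_float("weight_decay", 0.0, 0.3),
    }

best_run = trainer.hyperparameter_search(
    direction="maximize", backend="optuna", hp_space=hp_space, n_trials=10
)
```

hyperparameter_search returns the best trial, whose hyperparameters can then be used for a final training run.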
License
This model is released under the MIT license.
Citation
@misc{Ardiyanto_Mikhael_2024,
author = {Mikhael Ardiyanto},
title = {Aardiiiiy/indobertweet-base-Indonesian-sentiment-analysis},
year = {2024},
url = {https://huggingface.co/Aardiiiiy/indobertweet-base-Indonesian-sentiment-analysis},
publisher = {Hugging Face}
}