# distilbert-base-uncased-indonesia-squadv2

This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset, intended for question-answering tasks.
## Quick Start

The model achieves a loss of 1.9144 on the evaluation set.
## Features

The model is fine-tuned for question answering: given a context passage and a question, it extracts an answer span from the context.
## Installation

No installation steps are given in the original card; the usage example below requires the `transformers` library (e.g. `pip install transformers`) together with a PyTorch backend.
## Usage Examples

### Basic Usage

```python
from transformers import pipeline

# Load the fine-tuned model and tokenizer from the Hugging Face Hub
qa_pipeline = pipeline(
    "question-answering",
    model="asaduas/distilbert-base-uncased-indonesia-squadv2",
    tokenizer="asaduas/distilbert-base-uncased-indonesia-squadv2",
)

# Ask a question against an Indonesian-language context passage
qa_pipeline(
    {
        "context": "Pada tahun 1512 juga Afonso de Albuquerque mengirim Antonio Albreu dan Franscisco Serrao untuk memimpin armadanya mencari jalan ke tempat asal rempah-rempah di Maluku. Sepanjang perjalanan, mereka singgah di Madura, Bali, dan Lombok. Dengan menggunakan nakhoda-nakhoda Jawa, armada itu tiba di Kepulauan Banda, terus menuju Aibku Utara sampai tiba di Ternate.",
        "question": "Siapa yang dikirim oleh Afonso de Albuquerque Pada tahun 1512?",
    }
)
```
Output:

```python
[{'score': 0.8919295072555542,
  'start': 51,
  'end': 88,
  'answer': ' Antonio Albreu dan Franscisco Serrao'}]
```
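In the output, `start` and `end` are character offsets into the context string, so slicing the context at those offsets reproduces the `answer` field (note the leading space in the returned answer). A quick standalone check using the offsets from the example above:

```python
# First sentence of the example context (the answer span lies within it)
context = (
    "Pada tahun 1512 juga Afonso de Albuquerque mengirim Antonio Albreu "
    "dan Franscisco Serrao untuk memimpin armadanya mencari jalan ke "
    "tempat asal rempah-rempah di Maluku."
)

# start=51, end=88 come from the pipeline output above
answer = context[51:88]
print(repr(answer))  # ' Antonio Albreu dan Franscisco Serrao'
```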
## Documentation

### Model description

More information needed

### Intended uses & limitations

More information needed

### Training and evaluation data

More information needed

### Training procedure

#### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
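With `lr_scheduler_type: linear`, the learning rate decays linearly from 2e-05 toward zero over the 44499 training steps shown in the results table below (3 epochs of 14833 steps each). A minimal sketch of that schedule, assuming zero warmup steps (the card does not state a warmup value):

```python
BASE_LR = 2e-05
TOTAL_STEPS = 44499  # 3 epochs x 14833 steps per epoch (from the results table)

def linear_lr(step: int) -> float:
    """Linearly decay the learning rate to zero over training (no warmup assumed)."""
    return BASE_LR * max(0.0, 1.0 - step / TOTAL_STEPS)

print(linear_lr(0))            # 2e-05 at the start of training
print(linear_lr(TOTAL_STEPS))  # 0.0 at the final step
```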
#### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|---------------|-------|-------|-----------------|
| 2.1676        | 1.0   | 14833 | 2.0658          |
| 1.865         | 2.0   | 29666 | 1.9552          |
| 1.6669        | 3.0   | 44499 | 1.9144          |
#### Framework versions

- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
### Evaluation results

```python
{'exact': 42.29721064443732,
 'f1': 54.120071422699546,
 'total': 24952,
 'HasAns_exact': 42.29721064443732,
 'HasAns_f1': 54.120071422699546,
 'HasAns_total': 24952,
 'best_exact': 42.29721064443732,
 'best_exact_thresh': 0.0,
 'best_f1': 54.120071422699546,
 'best_f1_thresh': 0.0}
```
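In these metrics, `exact` is the percentage of predictions that match a gold answer exactly after normalization, while `f1` measures token overlap between the predicted and gold answer strings. A simplified sketch of the token-level F1 used in SQuAD-style evaluation (omitting the punctuation and article normalization the official script applies):

```python
from collections import Counter

def token_f1(prediction: str, ground_truth: str) -> float:
    """Token-overlap F1 between a predicted and a gold answer string."""
    pred_tokens = prediction.lower().split()
    gold_tokens = ground_truth.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("Antonio Albreu dan Franscisco Serrao",
               "Antonio Albreu dan Franscisco Serrao"))  # 1.0
```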
## License

This model is licensed under the Apache-2.0 license.