# wav2vec2-base-sound2
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
## Quick Start
The model achieves the following results on the evaluation set:
- Loss: 1.5012
- Accuracy: 0.5357
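A minimal inference sketch is shown below. Assumptions (not stated in the card): the checkpoint is available locally in `./wav2vec2-base-sound2` or under a Hub id of that name, and the task is audio classification, since the card reports accuracy rather than WER.

```python
def classify(audio_path: str, model_id: str = "wav2vec2-base-sound2"):
    """Run the fine-tuned model on a 16 kHz audio file and return
    a list of {"label": ..., "score": ...} predictions.

    The model id above is a hypothetical placeholder; replace it with
    the actual local path or Hub repo id of this checkpoint.
    """
    # Imported lazily so the heavy dependency only loads when inference runs.
    from transformers import pipeline

    clf = pipeline("audio-classification", model=model_id)
    return clf(audio_path)
```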
## Documentation
### Model description
More information needed
### Intended uses & limitations
More information needed
### Training and evaluation data
More information needed
### Training procedure
#### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 25
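The hyperparameters above can be sketched in code: with gradient accumulation, the effective batch size is 32 × 4 = 128 (the `total_train_batch_size`), and the `linear` schedule warms the learning rate up over the first 10% of optimizer steps, then decays it linearly to zero. This is a minimal sketch of that schedule, not the Trainer's internal implementation.

```python
def linear_lr(step: int, total_steps: int,
              base_lr: float = 9e-05, warmup_ratio: float = 0.1) -> float:
    """Linear warmup to base_lr over the first warmup_ratio of steps,
    then linear decay to 0 (the `linear` lr_scheduler_type above)."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # Warmup phase: ramp from 0 up to base_lr.
        return base_lr * step / max(1, warmup_steps)
    # Decay phase: ramp from base_lr back down to 0.
    return base_lr * (total_steps - step) / max(1, total_steps - warmup_steps)

# Effective batch size: per-device batch * gradient accumulation steps.
effective_batch = 32 * 4  # matches total_train_batch_size: 128
```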
#### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 1    | 2.0762          | 0.0714   |
| No log        | 2.0   | 2    | 2.0638          | 0.1429   |
| No log        | 3.0   | 3    | 2.0387          | 0.2143   |
| No log        | 4.0   | 4    | 2.0124          | 0.2143   |
| No log        | 5.0   | 5    | 1.9864          | 0.2143   |
| No log        | 6.0   | 6    | 1.9609          | 0.2143   |
| No log        | 7.0   | 7    | 1.9235          | 0.2143   |
| No log        | 8.0   | 8    | 1.9379          | 0.2143   |
| No log        | 9.0   | 9    | 1.8627          | 0.2857   |
| 1.9713        | 10.0  | 10   | 1.8277          | 0.3214   |
| 1.9713        | 11.0  | 11   | 1.7765          | 0.3571   |
| 1.9713        | 12.0  | 12   | 1.7204          | 0.5      |
| 1.9713        | 13.0  | 13   | 1.6956          | 0.5      |
| 1.9713        | 14.0  | 14   | 1.6602          | 0.5357   |
| 1.9713        | 15.0  | 15   | 1.6277          | 0.5714   |
| 1.9713        | 16.0  | 16   | 1.6053          | 0.5      |
| 1.9713        | 17.0  | 17   | 1.5825          | 0.5      |
| 1.9713        | 18.0  | 18   | 1.5656          | 0.4286   |
| 1.9713        | 19.0  | 19   | 1.5616          | 0.4643   |
| 1.6334        | 20.0  | 20   | 1.5613          | 0.4286   |
| 1.6334        | 21.0  | 21   | 1.5419          | 0.5      |
| 1.6334        | 22.0  | 22   | 1.5166          | 0.5      |
| 1.6334        | 23.0  | 23   | 1.5088          | 0.5      |
| 1.6334        | 24.0  | 24   | 1.5052          | 0.5      |
| 1.6334        | 25.0  | 25   | 1.5012          | 0.5357   |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 1.14.0
- Tokenizers 0.12.1
## License

This model is licensed under the Apache-2.0 license.