my_awesome_mind_model
This model is a fine-tuned version of facebook/wav2vec2-base on the audiofolder dataset. It adapts the pre-trained representations of the base model to audio classification on the audiofolder dataset.
Quick Start
Fine-tuned from facebook/wav2vec2-base on the audiofolder dataset, the model achieves the following results on the evaluation set:
- Loss: 1.3338
- Accuracy: 0.5892
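The snippet below is a minimal inference sketch using the Transformers pipeline API. The repository id ("username/my_awesome_mind_model") and the audio file path are placeholders, since the card does not state where the checkpoint is published.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for audio classification.
# "username/my_awesome_mind_model" is a placeholder repository id; replace it
# with the actual Hub id or local path of this model.
classifier = pipeline("audio-classification", model="username/my_awesome_mind_model")

# Classify a local audio file; wav2vec2-base expects 16 kHz mono audio.
predictions = classifier("path/to/audio.wav")
for prediction in predictions:
    print(f"{prediction['label']}: {prediction['score']:.4f}")
```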
Documentation
Model Information
| Property | Details |
|----------|---------|
| Model Type | Fine-tuned version of facebook/wav2vec2-base |
| Training Data | audiofolder dataset |
| Metrics | Accuracy |
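For more control than the pipeline offers, the feature extractor and model can also be loaded directly. This is a sketch under the same assumption that the repository id is a placeholder; the silent one-second waveform only stands in for real 16 kHz mono audio.

```python
import numpy as np
import torch
from transformers import AutoFeatureExtractor, AutoModelForAudioClassification

# Placeholder repository id; replace with the actual Hub id or a local checkpoint path.
model_id = "username/my_awesome_mind_model"

feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = AutoModelForAudioClassification.from_pretrained(model_id)

# One second of silence as a stand-in input; wav2vec2-base expects 16 kHz mono audio.
waveform = np.zeros(16_000, dtype=np.float32)

inputs = feature_extractor(waveform, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_label = model.config.id2label[logits.argmax(-1).item()]
print(predicted_label)
```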
Training Procedure
Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch reproducing them follows the list):
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
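These values can be expressed with `transformers.TrainingArguments` roughly as follows. This is a sketch, not the exact training script: `output_dir` is a placeholder, a single training device is assumed so that the effective batch size works out to 128, and the Adam betas and epsilon match the library defaults.

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments mirroring the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="my_awesome_mind_model",  # placeholder output directory
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,  # 32 * 4 = 128 effective train batch size on one device
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
)
```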
Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7071 | 0.95 | 14 | 2.7063 | 0.0602 |
| 2.7033 | 1.97 | 29 | 2.7006 | 0.0645 |
| 2.6835 | 2.98 | 44 | 2.6793 | 0.0817 |
| 2.6551 | 4.0 | 59 | 2.5549 | 0.1699 |
| 2.5023 | 4.95 | 73 | 2.3970 | 0.2258 |
| 2.4257 | 5.97 | 88 | 2.3068 | 0.2495 |
| 2.2542 | 6.98 | 103 | 2.2121 | 0.2688 |
| 2.2419 | 8.0 | 118 | 2.1736 | 0.2731 |
| 2.1278 | 8.95 | 132 | 2.1675 | 0.2430 |
| 2.0592 | 9.97 | 147 | 2.1207 | 0.2796 |
| 1.9576 | 10.98 | 162 | 2.0662 | 0.2731 |
| 1.9023 | 12.0 | 177 | 1.9738 | 0.3312 |
| 1.8367 | 12.95 | 191 | 2.0420 | 0.2903 |
| 1.7822 | 13.97 | 206 | 2.0161 | 0.2860 |
| 1.6934 | 14.98 | 221 | 2.0215 | 0.2989 |
| 1.7093 | 16.0 | 236 | 1.9287 | 0.3290 |
| 1.6158 | 16.95 | 250 | 1.8138 | 0.3849 |
| 1.5879 | 17.97 | 265 | 1.8043 | 0.3871 |
| 1.5249 | 18.98 | 280 | 1.9117 | 0.3548 |
| 1.4821 | 20.0 | 295 | 1.7242 | 0.4215 |
| 1.4629 | 20.95 | 309 | 1.6981 | 0.4538 |
| 1.3847 | 21.97 | 324 | 1.6701 | 0.4516 |
| 1.3595 | 22.98 | 339 | 1.6891 | 0.4495 |
| 1.298 | 24.0 | 354 | 1.6321 | 0.4667 |
| 1.2479 | 24.95 | 368 | 1.5519 | 0.4989 |
| 1.2135 | 25.97 | 383 | 1.5477 | 0.4839 |
| 1.1833 | 26.98 | 398 | 1.5437 | 0.5032 |
| 1.1298 | 28.0 | 413 | 1.5425 | 0.5097 |
| 1.079 | 28.95 | 427 | 1.5076 | 0.5247 |
| 1.0709 | 29.97 | 442 | 1.5288 | 0.5140 |
| 1.0286 | 30.98 | 457 | 1.4497 | 0.5419 |
| 0.9896 | 32.0 | 472 | 1.4663 | 0.5355 |
| 0.9707 | 32.95 | 486 | 1.4683 | 0.5333 |
| 0.9443 | 33.97 | 501 | 1.4977 | 0.5226 |
| 0.8998 | 34.98 | 516 | 1.4178 | 0.5505 |
| 0.9048 | 36.0 | 531 | 1.4131 | 0.5462 |
| 0.8587 | 36.95 | 545 | 1.3791 | 0.5634 |
| 0.84 | 37.97 | 560 | 1.4036 | 0.5527 |
| 0.8155 | 38.98 | 575 | 1.4139 | 0.5505 |
| 0.8086 | 40.0 | 590 | 1.3993 | 0.5462 |
| 0.808 | 40.95 | 604 | 1.3325 | 0.5914 |
| 0.7929 | 41.97 | 619 | 1.3500 | 0.5806 |
| 0.7635 | 42.98 | 634 | 1.3471 | 0.5720 |
| 0.761 | 44.0 | 649 | 1.3636 | 0.5634 |
| 0.7456 | 44.95 | 663 | 1.3551 | 0.5828 |
| 0.75 | 45.97 | 678 | 1.3431 | 0.5849 |
| 0.7232 | 46.98 | 693 | 1.3338 | 0.5871 |
| 0.7625 | 47.46 | 700 | 1.3338 | 0.5892 |
Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
License
This project is licensed under the Apache-2.0 license.