# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset, achieving a loss of 0.4989 and an accuracy of 0.91 on the evaluation set.
## Quick Start
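A minimal inference sketch using the `transformers` audio-classification pipeline; the repo id below is a placeholder, so substitute the actual Hub id of this checkpoint:

```python
from transformers import pipeline

# Placeholder repo id; replace with this checkpoint's actual Hub id.
classifier = pipeline(
    "audio-classification",
    model="your-username/distilhubert-finetuned-gtzan",
)

# Classify a local audio clip; the pipeline handles decoding and resampling.
predictions = classifier("song.wav")
print(predictions)  # list of {"label": genre, "score": probability} dicts
```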
## Documentation
### Model Information
| Property | Details |
|----------|---------|
| Model Type | Fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) |
| Training Data | marsyas/gtzan |
| Metrics | Accuracy |
| License | Apache-2.0 |
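For lower-level control, the checkpoint can be loaded through the auto classes. A sketch assuming the standard feature-extractor workflow; the repo id is again a placeholder, and resampling to 16 kHz follows DistilHuBERT's expected input rate:

```python
import torch
from datasets import Audio, load_dataset
from transformers import AutoFeatureExtractor, AutoModelForAudioClassification

checkpoint = "your-username/distilhubert-finetuned-gtzan"  # placeholder id
feature_extractor = AutoFeatureExtractor.from_pretrained(checkpoint)
model = AutoModelForAudioClassification.from_pretrained(checkpoint)

# GTZAN ships at 22.05 kHz; resample to the extractor's 16 kHz rate.
gtzan = load_dataset("marsyas/gtzan", split="train")
gtzan = gtzan.cast_column("audio", Audio(sampling_rate=16_000))
sample = gtzan[0]["audio"]

inputs = feature_extractor(
    sample["array"],
    sampling_rate=sample["sampling_rate"],
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits

print(model.config.id2label[int(logits.argmax(dim=-1))])
```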
### Training Procedure
#### Training Hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 4e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
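A sketch of how these values map onto `transformers.TrainingArguments`; the `output_dir`, the per-epoch evaluation strategy, and best-model loading are assumptions (the headline metrics match epoch 24 rather than the final epoch), not settings stated in the card:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilhubert-finetuned-gtzan",  # assumed output directory
    learning_rate=4e-6,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=2,   # total train batch size: 4 * 2 = 8
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the defaults.
    evaluation_strategy="epoch",     # assumed: the results table is per-epoch
    save_strategy="epoch",           # assumed, required by load_best_model_at_end
    load_best_model_at_end=True,     # assumed: reported metrics match epoch 24
    metric_for_best_model="accuracy",
)
```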
#### Training Results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2359 | 1.0 | 112 | 0.4776 | 0.87 |
| 0.1235 | 2.0 | 225 | 0.4872 | 0.84 |
| 0.2083 | 3.0 | 337 | 0.4910 | 0.85 |
| 0.19 | 4.0 | 450 | 0.4953 | 0.87 |
| 0.1128 | 5.0 | 562 | 0.4801 | 0.87 |
| 0.1644 | 6.0 | 675 | 0.4703 | 0.87 |
| 0.0699 | 7.0 | 787 | 0.4692 | 0.85 |
| 0.1082 | 8.0 | 900 | 0.4708 | 0.87 |
| 0.0898 | 9.0 | 1012 | 0.4347 | 0.89 |
| 0.1071 | 10.0 | 1125 | 0.5310 | 0.85 |
| 0.0727 | 11.0 | 1237 | 0.4765 | 0.87 |
| 0.0338 | 12.0 | 1350 | 0.4859 | 0.87 |
| 0.0233 | 13.0 | 1462 | 0.4713 | 0.87 |
| 0.0248 | 14.0 | 1575 | 0.5068 | 0.88 |
| 0.0263 | 15.0 | 1687 | 0.4874 | 0.88 |
| 0.0185 | 16.0 | 1800 | 0.4925 | 0.88 |
| 0.0142 | 17.0 | 1912 | 0.4766 | 0.89 |
| 0.0178 | 18.0 | 2025 | 0.4850 | 0.89 |
| 0.0153 | 19.0 | 2137 | 0.4660 | 0.88 |
| 0.012 | 20.0 | 2250 | 0.4831 | 0.88 |
| 0.0113 | 21.0 | 2362 | 0.4965 | 0.89 |
| 0.0106 | 22.0 | 2475 | 0.5098 | 0.89 |
| 0.011 | 23.0 | 2587 | 0.5093 | 0.89 |
| 0.009 | 24.0 | 2700 | 0.4989 | 0.91 |
| 0.0094 | 25.0 | 2812 | 0.4999 | 0.89 |
| 0.0441 | 26.0 | 2925 | 0.5197 | 0.88 |
| 0.0079 | 27.0 | 3037 | 0.5115 | 0.89 |
| 0.0072 | 28.0 | 3150 | 0.5136 | 0.88 |
| 0.007 | 29.0 | 3262 | 0.5394 | 0.88 |
| 0.0068 | 30.0 | 3375 | 0.5374 | 0.88 |
| 0.0061 | 31.0 | 3487 | 0.5221 | 0.88 |
| 0.0533 | 32.0 | 3600 | 0.5775 | 0.87 |
| 0.0055 | 33.0 | 3712 | 0.5632 | 0.88 |
| 0.0059 | 34.0 | 3825 | 0.5584 | 0.87 |
| 0.0051 | 35.0 | 3937 | 0.5444 | 0.88 |
| 0.0051 | 36.0 | 4050 | 0.5373 | 0.88 |
| 0.0045 | 37.0 | 4162 | 0.5723 | 0.87 |
| 0.0058 | 38.0 | 4275 | 0.5773 | 0.87 |
| 0.0043 | 39.0 | 4387 | 0.5455 | 0.88 |
| 0.0044 | 40.0 | 4500 | 0.5686 | 0.88 |
| 0.004 | 41.0 | 4612 | 0.5622 | 0.87 |
| 0.004 | 42.0 | 4725 | 0.5797 | 0.88 |
| 0.0042 | 43.0 | 4837 | 0.5621 | 0.88 |
| 0.0037 | 44.0 | 4950 | 0.5734 | 0.87 |
| 0.0048 | 45.0 | 5062 | 0.5774 | 0.88 |
| 0.0039 | 46.0 | 5175 | 0.5901 | 0.87 |
| 0.0043 | 47.0 | 5287 | 0.5743 | 0.88 |
| 0.0043 | 48.0 | 5400 | 0.5757 | 0.87 |
| 0.0037 | 49.0 | 5512 | 0.5710 | 0.88 |
| 0.0036 | 49.78 | 5600 | 0.5759 | 0.87 |
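Validation accuracy peaks at 0.91 at epoch 24 (step 2700), matching the headline metrics above; in later epochs the training loss keeps shrinking toward zero while validation loss drifts upward, the usual signature of overfitting.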
### Framework Versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
## License
This model is licensed under the Apache-2.0 license.