# organoids-prova_organoid
This model is a fine-tuned version of google/vit-base-patch16-224 on the imagefolder dataset. It is designed for image classification and reaches 0.8576 accuracy on the evaluation set.
## Quick Start
The model achieves the following results on the evaluation set:
- Loss: 0.3433
- Accuracy: 0.8576
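The snippet below is a minimal inference sketch using the transformers pipeline API; the Hub repo id and the image path are placeholders, not names confirmed by this card.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint. The repo id below is hypothetical;
# substitute the actual Hub path where this model is hosted.
classifier = pipeline(
    "image-classification",
    model="<your-namespace>/organoids-prova_organoid",
)

# Classify a local organoid image (the path is illustrative).
predictions = classifier("path/to/organoid_image.png")
print(predictions)  # list of {"label": ..., "score": ...} dicts
```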
## Documentation
### Model description
More information needed
### Intended uses & limitations
More information needed
### Training and evaluation data
More information needed
### Training procedure
#### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 40
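For reference, here is a hedged sketch of how these hyperparameters map onto the transformers TrainingArguments API; the output directory name is an assumption, a single training device is assumed for the 512 total train batch size, and dataset loading plus the Trainer call are omitted.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="organoids-prova_organoid",  # hypothetical output directory
    learning_rate=1e-5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    gradient_accumulation_steps=4,  # 128 x 4 = 512 total train batch size
    num_train_epochs=40,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the default
    # optimizer settings, so no extra optimizer flags are set here.
)
```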
#### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2121 | 0.99 | 36 | 1.3066 | 0.4116 |
| 0.8905 | 1.99 | 72 | 0.9344 | 0.6749 |
| 0.6942 | 2.98 | 108 | 0.6875 | 0.7507 |
| 0.6087 | 4.0 | 145 | 0.5493 | 0.7896 |
| 0.5896 | 4.99 | 181 | 0.5028 | 0.7993 |
| 0.6168 | 5.99 | 217 | 0.4787 | 0.8100 |
| 0.5627 | 6.98 | 253 | 0.4373 | 0.8319 |
| 0.5654 | 8.0 | 290 | 0.4324 | 0.8299 |
| 0.5204 | 8.99 | 326 | 0.4130 | 0.8319 |
| 0.5581 | 9.99 | 362 | 0.4264 | 0.8241 |
| 0.5232 | 10.98 | 398 | 0.4074 | 0.8294 |
| 0.483 | 12.0 | 435 | 0.3850 | 0.8445 |
| 0.5208 | 12.99 | 471 | 0.3791 | 0.8489 |
| 0.4937 | 13.99 | 507 | 0.3723 | 0.8528 |
| 0.4436 | 14.98 | 543 | 0.3910 | 0.8440 |
| 0.5169 | 16.0 | 580 | 0.3794 | 0.8465 |
| 0.4394 | 16.99 | 616 | 0.3876 | 0.8440 |
| 0.4616 | 17.99 | 652 | 0.3844 | 0.8465 |
| 0.4983 | 18.98 | 688 | 0.3552 | 0.8591 |
| 0.5295 | 20.0 | 725 | 0.3561 | 0.8547 |
| 0.5121 | 20.99 | 761 | 0.3573 | 0.8537 |
| 0.4379 | 21.99 | 797 | 0.3593 | 0.8576 |
| 0.4653 | 22.98 | 833 | 0.3473 | 0.8601 |
| 0.486 | 24.0 | 870 | 0.3473 | 0.8610 |
| 0.4751 | 24.99 | 906 | 0.3638 | 0.8552 |
| 0.4462 | 25.99 | 942 | 0.3533 | 0.8542 |
| 0.4197 | 26.98 | 978 | 0.3464 | 0.8601 |
| 0.4966 | 28.0 | 1015 | 0.3451 | 0.8649 |
| 0.5004 | 28.99 | 1051 | 0.3634 | 0.8508 |
| 0.4156 | 29.99 | 1087 | 0.3723 | 0.8474 |
| 0.4508 | 30.98 | 1123 | 0.3342 | 0.8669 |
| 0.43 | 32.0 | 1160 | 0.3389 | 0.8639 |
| 0.5004 | 32.99 | 1196 | 0.3416 | 0.8615 |
| 0.4927 | 33.99 | 1232 | 0.3545 | 0.8533 |
| 0.4802 | 34.98 | 1268 | 0.3382 | 0.8610 |
| 0.4334 | 36.0 | 1305 | 0.3480 | 0.8542 |
| 0.4557 | 36.99 | 1341 | 0.3392 | 0.8601 |
| 0.4551 | 37.99 | 1377 | 0.3488 | 0.8542 |
| 0.4643 | 38.98 | 1413 | 0.3424 | 0.8586 |
| 0.513 | 39.72 | 1440 | 0.3433 | 0.8576 |
#### Framework versions
- Transformers 4.28.0
- Pytorch 1.8.1+cu111
- Datasets 2.14.5
- Tokenizers 0.13.3
## License
This model is released under the Apache 2.0 license.
| Property | Details |
|----------|---------|
| Model Type | Fine-tuned version of google/vit-base-patch16-224 |
| Training Data | imagefolder |
| Metrics | accuracy |
| Results | Accuracy: 0.8576287657920311 on the train split of imagefolder dataset |