segformer-b0-finetuned-morphpadver1-hgo-coord-v1
This model is a fine-tuned version of nvidia/mit-b1 on the NICOPOI-9/morphpad_coord_hgo_512_4class dataset. It performs 4-class semantic segmentation and achieves the evaluation results reported below.
Quick Start
This section provides a minimal usage example together with the model's evaluation results and training details.
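Below is a minimal inference sketch using the Transformers library. The repository id and the input image path are assumptions (the checkpoint is assumed to live under the NICOPOI-9 organization); adjust them to your setup.

```python
# Minimal inference sketch; repo_id and "example.png" are assumptions, not confirmed values.
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

repo_id = "NICOPOI-9/segformer-b0-finetuned-morphpadver1-hgo-coord-v1"  # assumed repository id
processor = AutoImageProcessor.from_pretrained(repo_id)
model = SegformerForSemanticSegmentation.from_pretrained(repo_id)

image = Image.open("example.png").convert("RGB")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels, H/4, W/4)

# Upsample logits to the input resolution and take the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
pred_mask = upsampled.argmax(dim=1)[0]  # (H, W) tensor of class indices
```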
Evaluation Results
This model achieves the following results on the evaluation set (the IoU metrics are illustrated in the sketch after this list):
- Loss: 0.0644
- Mean Iou: 0.9579
- Mean Accuracy: 0.9785
- Overall Accuracy: 0.9785
- Accuracy 0-0: 0.9792
- Accuracy 0-90: 0.9782
- Accuracy 90-0: 0.9762
- Accuracy 90-90: 0.9804
- Iou 0-0: 0.9634
- Iou 0-90: 0.9512
- Iou 90-0: 0.9543
- Iou 90-90: 0.9627
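The per-class IoU and mean IoU reported above follow the standard semantic-segmentation definitions. The sketch below illustrates those definitions with NumPy on random label maps for the four classes; it is not the exact evaluation script used to produce the numbers above.

```python
# Illustrative sketch of per-class IoU and mean IoU from a confusion matrix.
import numpy as np

def confusion_matrix(pred, target, num_classes):
    """Accumulate a num_classes x num_classes confusion matrix from label maps."""
    mask = (target >= 0) & (target < num_classes)
    idx = num_classes * target[mask].astype(int) + pred[mask].astype(int)
    return np.bincount(idx, minlength=num_classes**2).reshape(num_classes, num_classes)

def iou_per_class(cm):
    """IoU_c = TP_c / (TP_c + FP_c + FN_c) for each class c."""
    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp
    fn = cm.sum(axis=1) - tp
    return tp / np.maximum(tp + fp + fn, 1)

# Toy example with 4 classes (0-0, 0-90, 90-0, 90-90) on random 512x512 masks.
pred = np.random.randint(0, 4, size=(512, 512))
target = np.random.randint(0, 4, size=(512, 512))
cm = confusion_matrix(pred, target, num_classes=4)
print("per-class IoU:", iou_per_class(cm))
print("mean IoU:", iou_per_class(cm).mean())
```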
Documentation
Model description
This model is a fine-tuned version of nvidia/mit-b1 on the NICOPOI-9/morphpad_coord_hgo_512_4class dataset.
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 6e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 40
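For reference, a hedged sketch of how these hyperparameters could be expressed with the Transformers `TrainingArguments` API. The `output_dir` and any setting not listed above are assumptions, not values taken from the original training run.

```python
# Sketch only: mirrors the listed hyperparameters; output_dir is an assumption.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="segformer-b0-finetuned-morphpadver1-hgo-coord-v1",  # assumed
    learning_rate=6e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=40,
)
```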
Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy 0-0 | Accuracy 0-90 | Accuracy 90-0 | Accuracy 90-90 | Iou 0-0 | Iou 0-90 | Iou 90-0 | Iou 90-90 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1.1962 | 2.5445 | 4000 | 1.2063 | 0.2464 | 0.3994 | 0.4010 | 0.2991 | 0.2803 | 0.5096 | 0.5084 | 0.2367 | 0.2111 | 0.2658 | 0.2721 |
| 1.051 | 5.0891 | 8000 | 1.0734 | 0.3118 | 0.4765 | 0.4765 | 0.3856 | 0.5520 | 0.3937 | 0.5745 | 0.3123 | 0.3148 | 0.2968 | 0.3233 |
| 0.9309 | 7.6336 | 12000 | 0.9672 | 0.3612 | 0.5314 | 0.5323 | 0.4806 | 0.5216 | 0.7075 | 0.4158 | 0.3362 | 0.3706 | 0.3778 | 0.3603 |
| 0.8041 | 10.1781 | 16000 | 0.8444 | 0.4475 | 0.6180 | 0.6178 | 0.6131 | 0.6672 | 0.6120 | 0.5798 | 0.4403 | 0.4360 | 0.4543 | 0.4593 |
| 0.6617 | 12.7226 | 20000 | 0.7405 | 0.5039 | 0.6697 | 0.6700 | 0.6310 | 0.6588 | 0.6714 | 0.7177 | 0.5097 | 0.4912 | 0.5114 | 0.5033 |
| 0.54 | 15.2672 | 24000 | 0.6090 | 0.5828 | 0.7360 | 0.7362 | 0.6931 | 0.7532 | 0.7427 | 0.7550 | 0.5911 | 0.5709 | 0.5876 | 0.5819 |
| 0.7378 | 17.8117 | 28000 | 0.3740 | 0.7401 | 0.8507 | 0.8505 | 0.8789 | 0.8270 | 0.8186 | 0.8783 | 0.7712 | 0.7324 | 0.7203 | 0.7366 |
| 0.58 | 20.3562 | 32000 | 0.1892 | 0.8644 | 0.9272 | 0.9272 | 0.9329 | 0.9188 | 0.9142 | 0.9430 | 0.8810 | 0.8523 | 0.8539 | 0.8704 |
| 0.1305 | 22.9008 | 36000 | 0.1473 | 0.8945 | 0.9443 | 0.9443 | 0.9563 | 0.9245 | 0.9421 | 0.9542 | 0.9021 | 0.8783 | 0.8925 | 0.9049 |
| 0.1775 | 25.4453 | 40000 | 0.1133 | 0.9178 | 0.9571 | 0.9571 | 0.9578 | 0.9536 | 0.9583 | 0.9586 | 0.9264 | 0.9068 | 0.9130 | 0.9249 |
| 0.4792 | 27.9898 | 44000 | 0.0961 | 0.9306 | 0.9640 | 0.9640 | 0.9662 | 0.9633 | 0.9617 | 0.9650 | 0.9374 | 0.9194 | 0.9268 | 0.9387 |
| 0.1084 | 30.5344 | 48000 | 0.0886 | 0.9364 | 0.9671 | 0.9672 | 0.9684 | 0.9600 | 0.9689 | 0.9712 | 0.9429 | 0.9257 | 0.9335 | 0.9437 |
| 0.0471 | 33.0789 | 52000 | 0.0721 | 0.9485 | 0.9735 | 0.9735 | 0.9772 | 0.9674 | 0.9729 | 0.9767 | 0.9528 | 0.9402 | 0.9467 | 0.9542 |
| 0.0722 | 35.6234 | 56000 | 0.0646 | 0.9554 | 0.9772 | 0.9772 | 0.9809 | 0.9728 | 0.9757 | 0.9794 | 0.9576 | 0.9488 | 0.9522 | 0.9629 |
| 0.0406 | 38.1679 | 60000 | 0.0644 | 0.9579 | 0.9785 | 0.9785 | 0.9792 | 0.9782 | 0.9762 | 0.9804 | 0.9634 | 0.9512 | 0.9543 | 0.9627 |
Framework versions
- Transformers 4.48.3
- Pytorch 2.1.0
- Datasets 3.2.0
- Tokenizers 0.21.0
License
This model is released under the "other" license.