Ecc Segformer Main
Developed by rishitunu
An image segmentation model fine-tuned from nvidia/mit-b5 for crack detection tasks
Downloads 15
Release Time: 8/28/2023
Model Overview
This model is an image segmentation model based on the SegFormer architecture, specifically designed for engineering crack detection. It demonstrates good crack recognition capabilities on evaluation datasets.
Model Features
Crack detection capability
Optimized specifically for engineering crack detection tasks, achieving a crack-class Intersection over Union (IoU) of 0.4658 on the evaluation set
SegFormer architecture
Based on the efficient SegFormer architecture, using mit-b5 as the backbone
Model Capabilities
Image segmentation
Crack detection
Engineering structural health monitoring
Use Cases
Infrastructure inspection
Concrete structure crack detection
Used to detect cracks in concrete structures such as buildings and bridges
Crack detection accuracy of 0.4658
ecc_segformer_main
This model is a fine-tuned version of nvidia/mit-b5 on the rishitunu/ecc_crackdetector_dataset_main dataset. It is designed for image segmentation tasks, specifically crack detection, and the results below refer to the evaluation set.
Quick Start
The model achieves the following results on the evaluation set:
- Loss: 0.1918
- Mean Iou: 0.2329
- Mean Accuracy: 0.4658
- Overall Accuracy: 0.4658
- Accuracy Background: nan
- Accuracy Crack: 0.4658
- Iou Background: 0.0
- Iou Crack: 0.4658
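The card ships no usage snippet, so here is a minimal inference sketch using the standard `transformers` SegFormer classes. The repo id `rishitunu/ecc_segformer_main` and the label ids (0 = background, 1 = crack) are assumptions inferred from the metrics above, not confirmed by the card; adjust them to match the actual upload.

```python
# Hedged inference sketch: assumes the checkpoint is hosted on the
# Hugging Face Hub under the (hypothetical) repo id below.
from PIL import Image
import torch
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

REPO_ID = "rishitunu/ecc_segformer_main"  # hypothetical repo id

def segment_cracks(image_path: str) -> torch.Tensor:
    """Return an (H, W) tensor of per-pixel class ids (assumed 0 = background, 1 = crack)."""
    processor = SegformerImageProcessor.from_pretrained(REPO_ID)
    model = SegformerForSemanticSegmentation.from_pretrained(REPO_ID)
    model.eval()

    image = Image.open(image_path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits  # shape (1, num_labels, H/4, W/4)

    # SegFormer predicts at 1/4 resolution; upsample back to the input
    # size, then take the per-pixel argmax to get the segmentation mask.
    upsampled = torch.nn.functional.interpolate(
        logits, size=image.size[::-1], mode="bilinear", align_corners=False
    )
    return upsampled.argmax(dim=1)[0]

if __name__ == "__main__":
    mask = segment_cracks("concrete.jpg")  # hypothetical input image
    print("crack pixels:", int((mask == 1).sum()))
```

Note the upsample-then-argmax step: the raw logits are a quarter of the input resolution, which is easy to miss when overlaying the mask on the original image.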
Features
- Fine-Tuned Model: Based on nvidia/mit-b5, fine-tuned on a crack-detection dataset for better image segmentation performance.
- Comprehensive Evaluation Metrics: Reports loss, mean IoU, and per-class accuracy and IoU measures.
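A note on reading the metrics: the reported mean IoU is the unweighted average over the two classes, and since the background IoU is 0.0, it is exactly half the crack IoU.

```python
# Reproducing the card's mean IoU from the per-class IoUs.
iou_background = 0.0
iou_crack = 0.4658

mean_iou = (iou_background + iou_crack) / 2
print(round(mean_iou, 4))  # 0.2329
```

This is why the headline "accuracy of 0.4658" and the mean IoU of 0.2329 describe the same evaluation run.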
Technical Details
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- training_steps: 10000
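The polynomial scheduler above can be sketched as a plain decay function. The card does not state the polynomial power, warmup steps, or end learning rate, so the defaults below (power 1.0, i.e. linear decay to zero, no warmup) are assumptions, not the confirmed training configuration.

```python
def polynomial_lr(step, total_steps=10_000, base_lr=6e-05, end_lr=0.0, power=1.0):
    """Polynomially decay from base_lr to end_lr over total_steps.

    With power=1.0 (assumed here) this reduces to linear decay.
    """
    if step >= total_steps:
        return end_lr
    remaining = 1 - step / total_steps
    return (base_lr - end_lr) * remaining ** power + end_lr

print(polynomial_lr(0))       # 6e-05
print(polynomial_lr(5_000))   # 3e-05
print(polynomial_lr(10_000))  # 0.0
```

With `training_steps: 10000` and roughly 172 steps per epoch, the run covers about 58 epochs, matching the final row of the results table.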
Training results
Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Background | Accuracy Crack | Iou Background | Iou Crack |
---|---|---|---|---|---|---|---|---|---|---|
0.1069 | 1.0 | 172 | 0.1376 | 0.1660 | 0.3320 | 0.3320 | nan | 0.3320 | 0.0 | 0.3320 |
0.0682 | 2.0 | 344 | 0.1327 | 0.2298 | 0.4596 | 0.4596 | nan | 0.4596 | 0.0 | 0.4596 |
0.0666 | 3.0 | 516 | 0.2478 | 0.1200 | 0.2401 | 0.2401 | nan | 0.2401 | 0.0 | 0.2401 |
0.0639 | 4.0 | 688 | 0.1732 | 0.1538 | 0.3076 | 0.3076 | nan | 0.3076 | 0.0 | 0.3076 |
0.0624 | 5.0 | 860 | 0.1027 | 0.2334 | 0.4668 | 0.4668 | nan | 0.4668 | 0.0 | 0.4668 |
0.0557 | 6.0 | 1032 | 0.1003 | 0.1851 | 0.3703 | 0.3703 | nan | 0.3703 | 0.0 | 0.3703 |
0.0563 | 7.0 | 1204 | 0.1512 | 0.2007 | 0.4014 | 0.4014 | nan | 0.4014 | 0.0 | 0.4014 |
0.054 | 8.0 | 1376 | 0.1000 | 0.2401 | 0.4802 | 0.4802 | nan | 0.4802 | 0.0 | 0.4802 |
0.0546 | 9.0 | 1548 | 0.0933 | 0.2238 | 0.4475 | 0.4475 | nan | 0.4475 | 0.0 | 0.4475 |
0.0498 | 10.0 | 1720 | 0.0964 | 0.2303 | 0.4606 | 0.4606 | nan | 0.4606 | 0.0 | 0.4606 |
0.0515 | 11.0 | 1892 | 0.1107 | 0.2258 | 0.4516 | 0.4516 | nan | 0.4516 | 0.0 | 0.4516 |
0.0453 | 12.0 | 2064 | 0.0961 | 0.2557 | 0.5115 | 0.5115 | nan | 0.5115 | 0.0 | 0.5115 |
0.0431 | 13.0 | 2236 | 0.1027 | 0.2396 | 0.4792 | 0.4792 | nan | 0.4792 | 0.0 | 0.4792 |
0.0418 | 14.0 | 2408 | 0.1027 | 0.2521 | 0.5042 | 0.5042 | nan | 0.5042 | 0.0 | 0.5042 |
0.0426 | 15.0 | 2580 | 0.1059 | 0.2561 | 0.5123 | 0.5123 | nan | 0.5123 | 0.0 | 0.5123 |
0.0377 | 16.0 | 2752 | 0.1193 | 0.2281 | 0.4561 | 0.4561 | nan | 0.4561 | 0.0 | 0.4561 |
0.0369 | 17.0 | 2924 | 0.1161 | 0.2486 | 0.4972 | 0.4972 | nan | 0.4972 | 0.0 | 0.4972 |
0.036 | 18.0 | 3096 | 0.1058 | 0.2515 | 0.5029 | 0.5029 | nan | 0.5029 | 0.0 | 0.5029 |
0.034 | 19.0 | 3268 | 0.1176 | 0.2434 | 0.4868 | 0.4868 | nan | 0.4868 | 0.0 | 0.4868 |
0.0337 | 20.0 | 3440 | 0.1162 | 0.2254 | 0.4509 | 0.4509 | nan | 0.4509 | 0.0 | 0.4509 |
0.0281 | 21.0 | 3612 | 0.1203 | 0.2213 | 0.4426 | 0.4426 | nan | 0.4426 | 0.0 | 0.4426 |
0.0354 | 22.0 | 3784 | 0.1266 | 0.2384 | 0.4768 | 0.4768 | nan | 0.4768 | 0.0 | 0.4768 |
0.0323 | 23.0 | 3956 | 0.1223 | 0.2409 | 0.4818 | 0.4818 | nan | 0.4818 | 0.0 | 0.4818 |
0.0299 | 24.0 | 4128 | 0.1356 | 0.2195 | 0.4390 | 0.4390 | nan | 0.4390 | 0.0 | 0.4390 |
0.0294 | 25.0 | 4300 | 0.1285 | 0.2318 | 0.4636 | 0.4636 | nan | 0.4636 | 0.0 | 0.4636 |
0.0295 | 26.0 | 4472 | 0.1274 | 0.2559 | 0.5119 | 0.5119 | nan | 0.5119 | 0.0 | 0.5119 |
0.0252 | 27.0 | 4644 | 0.1387 | 0.2413 | 0.4827 | 0.4827 | nan | 0.4827 | 0.0 | 0.4827 |
0.029 | 28.0 | 4816 | 0.1468 | 0.2236 | 0.4472 | 0.4472 | nan | 0.4472 | 0.0 | 0.4472 |
0.0218 | 29.0 | 4988 | 0.1448 | 0.2433 | 0.4866 | 0.4866 | nan | 0.4866 | 0.0 | 0.4866 |
0.0275 | 30.0 | 5160 | 0.1478 | 0.2318 | 0.4635 | 0.4635 | nan | 0.4635 | 0.0 | 0.4635 |
0.0233 | 31.0 | 5332 | 0.1377 | 0.2502 | 0.5005 | 0.5005 | nan | 0.5005 | 0.0 | 0.5005 |
0.0252 | 32.0 | 5504 | 0.1458 | 0.2399 | 0.4797 | 0.4797 | nan | 0.4797 | 0.0 | 0.4797 |
0.0245 | 33.0 | 5676 | 0.1431 | 0.2480 | 0.4960 | 0.4960 | nan | 0.4960 | 0.0 | 0.4960 |
0.0225 | 34.0 | 5848 | 0.1562 | 0.2439 | 0.4879 | 0.4879 | nan | 0.4879 | 0.0 | 0.4879 |
0.0242 | 35.0 | 6020 | 0.1633 | 0.2323 | 0.4646 | 0.4646 | nan | 0.4646 | 0.0 | 0.4646 |
0.0213 | 36.0 | 6192 | 0.1666 | 0.2274 | 0.4549 | 0.4549 | nan | 0.4549 | 0.0 | 0.4549 |
0.0256 | 37.0 | 6364 | 0.1665 | 0.2340 | 0.4680 | 0.4680 | nan | 0.4680 | 0.0 | 0.4680 |
0.0237 | 38.0 | 6536 | 0.1658 | 0.2410 | 0.4819 | 0.4819 | nan | 0.4819 | 0.0 | 0.4819 |
0.0192 | 39.0 | 6708 | 0.1705 | 0.2286 | 0.4572 | 0.4572 | nan | 0.4572 | 0.0 | 0.4572 |
0.0198 | 40.0 | 6880 | 0.1688 | 0.2322 | 0.4644 | 0.4644 | nan | 0.4644 | 0.0 | 0.4644 |
0.0214 | 41.0 | 7052 | 0.1717 | 0.2315 | 0.4630 | 0.4630 | nan | 0.4630 | 0.0 | 0.4630 |
0.0197 | 42.0 | 7224 | 0.1764 | 0.2338 | 0.4677 | 0.4677 | nan | 0.4677 | 0.0 | 0.4677 |
0.0187 | 43.0 | 7396 | 0.1764 | 0.2437 | 0.4874 | 0.4874 | nan | 0.4874 | 0.0 | 0.4874 |
0.0212 | 44.0 | 7568 | 0.1874 | 0.2259 | 0.4519 | 0.4519 | nan | 0.4519 | 0.0 | 0.4519 |
0.0188 | 45.0 | 7740 | 0.1854 | 0.2362 | 0.4725 | 0.4725 | nan | 0.4725 | 0.0 | 0.4725 |
0.0188 | 46.0 | 7912 | 0.1772 | 0.2320 | 0.4641 | 0.4641 | nan | 0.4641 | 0.0 | 0.4641 |
0.0228 | 47.0 | 8084 | 0.1783 | 0.2385 | 0.4770 | 0.4770 | nan | 0.4770 | 0.0 | 0.4770 |
0.0199 | 48.0 | 8256 | 0.1850 | 0.2317 | 0.4634 | 0.4634 | nan | 0.4634 | 0.0 | 0.4634 |
0.0202 | 49.0 | 8428 | 0.1872 | 0.2336 | 0.4672 | 0.4672 | nan | 0.4672 | 0.0 | 0.4672 |
0.0181 | 50.0 | 8600 | 0.1803 | 0.2405 | 0.4810 | 0.4810 | nan | 0.4810 | 0.0 | 0.4810 |
0.0157 | 51.0 | 8772 | 0.1874 | 0.2349 | 0.4697 | 0.4697 | nan | 0.4697 | 0.0 | 0.4697 |
0.0162 | 52.0 | 8944 | 0.1889 | 0.2332 | 0.4665 | 0.4665 | nan | 0.4665 | 0.0 | 0.4665 |
0.0178 | 53.0 | 9116 | 0.1948 | 0.2357 | 0.4715 | 0.4715 | nan | 0.4715 | 0.0 | 0.4715 |
0.0166 | 54.0 | 9288 | 0.1911 | 0.2333 | 0.4666 | 0.4666 | nan | 0.4666 | 0.0 | 0.4666 |
0.0193 | 55.0 | 9460 | 0.1959 | 0.2306 | 0.4611 | 0.4611 | nan | 0.4611 | 0.0 | 0.4611 |
0.0199 | 56.0 | 9632 | 0.1999 | 0.2330 | 0.4659 | 0.4659 | nan | 0.4659 | 0.0 | 0.4659 |
0.0177 | 57.0 | 9804 | 0.1943 | 0.2319 | 0.4639 | 0.4639 | nan | 0.4639 | 0.0 | 0.4639 |
0.019 | 58.0 | 9976 | 0.1926 | 0.2327 | 0.4653 | 0.4653 | nan | 0.4653 | 0.0 | 0.4653 |
0.0187 | 58.14 | 10000 | 0.1918 | 0.2329 | 0.4658 | 0.4658 | nan | 0.4658 | 0.0 | 0.4658 |
Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cpu
- Datasets 2.14.4
- Tokenizers 0.13.3
License
other