# UNet Model for COVID-19 CT Scan Segmentation
This UNet-based model is crafted for the automated segmentation of COVID-19 infected lung regions in CT scans. By integrating attention mechanisms into the classic U-Net, it sharpens the focus on the infected areas, thereby enhancing segmentation accuracy.
## Quick Start

### Load the Model

#### TensorFlow/Keras
```python
import tensorflow.keras.backend as K
from huggingface_hub import hf_hub_download
from tensorflow.keras.models import load_model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import register_keras_serializable

# Dice coefficient metric; registered so Keras can resolve it when
# deserializing the saved model.
@register_keras_serializable()
def dice_coef(y_true, y_pred, smooth=1e-6):
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

# Placeholder for the custom loss the model was saved with; it is never
# called because the model is loaded with compile=False.
@register_keras_serializable()
def gl_sl(*args, **kwargs):
    pass

# Download the weights from the Hugging Face Hub and rebuild the model.
model_path = hf_hub_download(repo_id="amal90888/unet-segmentation-model", filename="unet_model.keras")
unet = load_model(model_path, custom_objects={"dice_coef": dice_coef, "gl_sl": gl_sl}, compile=False)

# Recompile for evaluation or fine-tuning.
unet.compile(optimizer=Adam(learning_rate=1e-4), loss="binary_crossentropy", metrics=["accuracy", dice_coef])
print("Model loaded and recompiled successfully!")
```
## Features

- Automated Segmentation: Designed specifically for automated segmentation of COVID-19 infected lung regions in CT scans.
- Attention Mechanisms: Integrates attention gates into the classic U-Net architecture to improve focus on infected areas.
## Installation

No model-specific installation steps are required beyond the Python packages used in the Quick Start snippet.
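As a minimal setup (assuming a recent Python 3 environment; exact package versions are not pinned here), the dependencies can be installed with pip:

```bash
pip install tensorflow huggingface_hub
```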
## Usage Examples

### Basic Usage

Load the model as shown in the Quick Start section, then run inference on preprocessed CT slices; a minimal inference sketch follows.
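The sketch below assumes a single CT slice stored as a NumPy array in Hounsfield units; the file name `example_ct_slice.npy`, the helper `preprocess_slice`, and the 0.5 threshold are illustrative choices, not part of the released code.

```python
import numpy as np
import tensorflow as tf

def preprocess_slice(ct_slice, hu_min=-1000, hu_max=1500, size=128):
    """Clip to the HU window used in training, scale to [0, 1], and resize to 128 x 128."""
    clipped = np.clip(ct_slice.astype("float32"), hu_min, hu_max)
    normalized = (clipped - hu_min) / (hu_max - hu_min)
    resized = tf.image.resize(normalized[..., np.newaxis], (size, size))
    return tf.expand_dims(resized, axis=0)  # shape (1, 128, 128, 1)

ct_slice = np.load("example_ct_slice.npy")          # hypothetical single-slice input
probabilities = unet.predict(preprocess_slice(ct_slice))[0, ..., 0]
mask = (probabilities > 0.5).astype("uint8")        # binary infection mask
```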
## Documentation

### Model Overview

- Architecture: U-Net + Attention Gates
- Dataset: COVID-19 CT scans from Coronacases.org, Radiopaedia.org, and the Zenodo repository
- Task: Image Segmentation (Lung Infection)
- Metrics: Dice Coefficient, IoU, Hausdorff Distance, ASSD (a small computation sketch follows)
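For reference, here is a small sketch (not from the original code) of how the two overlap metrics can be computed from binary masks with NumPy; the distance-based metrics (Hausdorff, ASSD) are typically computed with dedicated medical-imaging libraries such as MedPy and are omitted here.

```python
import numpy as np

def dice_score(y_true, y_pred, eps=1e-6):
    """Dice coefficient between two binary masks."""
    intersection = np.sum(y_true * y_pred)
    return (2.0 * intersection + eps) / (np.sum(y_true) + np.sum(y_pred) + eps)

def iou_score(y_true, y_pred, eps=1e-6):
    """Intersection over Union between two binary masks."""
    intersection = np.sum(y_true * y_pred)
    union = np.sum(y_true) + np.sum(y_pred) - intersection
    return (intersection + eps) / (union + eps)
```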
### Training Details

- Dataset Size: 20 CT scans (512 × 512 × 301 slices)
- Preprocessing:
  - Normalization of pixel intensities to [0, 1]
  - HU thresholding to [-1000, 1500]
  - Image resizing to 128 × 128 pixels
  - Binarization of masks (0 = background, 1 = infected regions)
- Augmentation:
  - Rotations: ±5 degrees
  - Elastic transformations, Gaussian blur
  - Brightness/contrast variations
  - Final dataset: 2,252 CT slices
- Training:
  - Optimizer: Adam (learning rate = 1e-4)
  - Loss Function: Weighted BCE-Dice Loss + Surface Loss (see the sketch after this list)
  - Batch Size: 16
  - Epochs: 25
  - Training Platform: NVIDIA Tesla T4 (Google Colab Pro)
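As an illustration of the BCE-Dice part of that objective, here is a minimal sketch; the 0.5 weighting is an assumption, and the surface-loss term is omitted because its distance-map computation is not described here.

```python
import tensorflow.keras.backend as K

def bce_dice_loss(y_true, y_pred, bce_weight=0.5, smooth=1e-6):
    """Weighted sum of binary cross-entropy and Dice loss (surface loss omitted)."""
    bce = K.mean(K.binary_crossentropy(y_true, y_pred))
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    dice = (2.0 * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)
    return bce_weight * bce + (1.0 - bce_weight) * (1.0 - dice)
```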
### Model Performance

| Metric | Non-Augmented Model | Augmented Model |
|---|---|---|
| Dice Coefficient | 0.8502 | 0.8658 |
| IoU (Mean) | 0.7445 | 0.8316 |
| ASSD (Symmetric Distance) | 0.3907 | 0.3888 |
| Hausdorff Distance | 8.4853 | 9.8995 |
| ROC AUC Score | 0.91 | 1.00 |
Key Findings:

- Augmentation significantly improved segmentation accuracy
- The Attention U-Net outperformed other segmentation models
## Technical Details

The model enhances the classic U-Net architecture with attention gates that re-weight the encoder skip connections before they reach the decoder. Training combines the preprocessing and augmentation steps listed above with a weighted BCE-Dice Loss + Surface Loss objective.
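For illustration only, here is a minimal Keras sketch of an additive attention gate; the layer sizes are arbitrary and it assumes the gating signal has already been upsampled to the skip connection's spatial resolution, so it is not the exact block used in this model.

```python
from tensorflow.keras import layers

def attention_gate(skip, gating, inter_channels):
    """Re-weight a U-Net skip connection with an additive attention gate.
    Assumes `skip` and `gating` share the same spatial resolution."""
    theta = layers.Conv2D(inter_channels, 1)(skip)            # project skip features
    phi = layers.Conv2D(inter_channels, 1)(gating)            # project gating signal
    attn = layers.Activation("relu")(layers.Add()([theta, phi]))
    psi = layers.Conv2D(1, 1, activation="sigmoid")(attn)     # per-pixel attention weights
    return layers.Multiply()([skip, psi])                     # suppress irrelevant regions
```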
## License
This project is licensed under the MIT license.