# Transformers for Bone Age Hand Radiograph Cropping
This model crops hand radiographs to standardize the image input for bone age models, improving the accuracy and consistency of bone age assessment.
## Quick Start

To use the model, follow these steps:

```python
import cv2
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("ianpan/bone-age-crop", trust_remote_code=True)
model = model.eval()

# Load the radiograph as a single-channel (grayscale) image
img = cv2.imread(..., 0)
img_shape = torch.tensor([img.shape[:2]])

# Preprocess and add channel and batch dimensions
x = model.preprocess(img)
x = torch.from_numpy(x).unsqueeze(0).unsqueeze(0)
x = x.float()

with torch.inference_mode():
    coords = model(x, img_shape)

# Crop the original image using the predicted box
coords = coords[0].numpy()
x, y, w, h = coords.round().astype(int)  # cast to integer pixel values for slicing
cropped_img = img[y: y + h, x: x + w]
```

If you have `pydicom` installed, you can also load a DICOM image directly:

```python
img = model.load_image_from_dicom(path_to_dicom)
```
## Features

- Standardization: Crops hand radiographs to standardize the image input for bone age models.
- Lightweight Backbone: Uses a lightweight `mobilenetv3_small_100` backbone for efficient processing.
- Normalized Coordinates: Predicts normalized `xywh` coordinates for cropping (see the sketch after this list).
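The Quick Start above passes the original image shape to the model and crops with the returned coordinates directly. If you instead work with the normalized `xywh` output, converting it to pixel values is straightforward. A minimal sketch, assuming `x`/`w` are normalized by image width and `y`/`h` by image height (the helper name is hypothetical, not part of this repository):

```python
def normalized_xywh_to_pixels(coords, img_h, img_w):
    """Hypothetical helper: convert normalized (x, y, w, h) to integer pixels.

    Assumes x and w are fractions of the image width and y and h are
    fractions of the image height.
    """
    x, y, w, h = coords
    return (
        int(round(x * img_w)),
        int(round(y * img_h)),
        int(round(w * img_w)),
        int(round(h * img_h)),
    )

# Usage (img is a grayscale numpy array, norm_coords the normalized output):
# x, y, w, h = normalized_xywh_to_pixels(norm_coords, *img.shape[:2])
# cropped_img = img[y:y + h, x:x + w]
```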
đĻ Installation
The installation details are not provided in the original document. If you want to use this model, you may need to install the transformers
library and other necessary dependencies like cv2
, torch
, and pydicom
if not already installed. You can use the following commands:
pip install transformers
pip install opencv-python
pip install torch
pip install pydicom
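A quick, purely illustrative way to confirm the dependencies are importable after installation:

```python
# Sanity check: print the installed versions of the dependencies.
import cv2
import torch
import transformers
import pydicom

print(cv2.__version__, torch.__version__, transformers.__version__, pydicom.__version__)
```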
## Usage Examples

### Basic Usage

```python
import cv2
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("ianpan/bone-age-crop", trust_remote_code=True)
model = model.eval()

# Load the radiograph as a single-channel (grayscale) image
img = cv2.imread(..., 0)
img_shape = torch.tensor([img.shape[:2]])

# Preprocess and add channel and batch dimensions
x = model.preprocess(img)
x = torch.from_numpy(x).unsqueeze(0).unsqueeze(0)
x = x.float()

with torch.inference_mode():
    coords = model(x, img_shape)

# Crop the original image using the predicted box
coords = coords[0].numpy()
x, y, w, h = coords.round().astype(int)  # cast to integer pixel values for slicing
cropped_img = img[y: y + h, x: x + w]
```
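Since the purpose of cropping is to standardize the input to a downstream bone age model, you will usually resize the crop to that model's expected input size before inference. A minimal sketch, continuing from the basic example above and assuming a hypothetical 512 × 512 target resolution:

```python
import cv2

# Hypothetical target size; use whatever resolution your bone age model expects.
TARGET_SIZE = (512, 512)

resized = cv2.resize(cropped_img, TARGET_SIZE, interpolation=cv2.INTER_LINEAR)
cv2.imwrite("cropped_hand.png", resized)  # optionally save the standardized crop
```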
### Advanced Usage

If you have `pydicom` installed, you can load a DICOM image directly:

```python
img = model.load_image_from_dicom(path_to_dicom)
```
## Documentation

The model was trained and validated on 12,592 pediatric hand radiographs from the RSNA Pediatric Bone Age Challenge, using an 80%/20% train/validation split.

On single-fold validation, the model achieved the following mean absolute errors (in normalized coordinates):

- x: 0.0152
- y: 0.0121
- w: 0.0261
- h: 0.0213
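To put these numbers in more intuitive terms, you can convert the normalized errors to pixels for a given image size. A rough, illustrative calculation, assuming a hypothetical 2044 × 1514 (height × width) radiograph and that `x`/`w` are normalized by width and `y`/`h` by height:

```python
# Illustrative only: approximate pixel error for a hypothetical image size.
img_h, img_w = 2044, 1514
mae = {"x": 0.0152, "y": 0.0121, "w": 0.0261, "h": 0.0213}

pixel_error = {
    "x": mae["x"] * img_w,  # ~23 px
    "y": mae["y"] * img_h,  # ~25 px
    "w": mae["w"] * img_w,  # ~40 px
    "h": mae["h"] * img_h,  # ~44 px
}
print(pixel_error)
```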
## Technical Details

The model uses a lightweight `mobilenetv3_small_100` backbone to predict normalized `xywh` coordinates for cropping hand radiographs. It was trained on data from the RSNA Pediatric Bone Age Challenge with an 80%/20% training/validation split. The validation mean absolute errors reported above are small in normalized-coordinate terms, indicating that the predicted crop boxes closely track the hand regions.
## License

This model is licensed under the `apache-2.0` license.
## Information Table

| Property | Details |
|----------|---------|
| Model Type | Object Detection |
| Base Model | timm/mobilenetv3_small_100.lamb_in1k |
| Training Data | 12,592 pediatric hand radiographs from the RSNA Pediatric Bone Age Challenge with an 80%/20% split |
| License | apache-2.0 |
| Pipeline Tag | object-detection |
