WhyXrayCLIP

Developed by yyupenn
WhyXrayCLIP is a model that aligns chest X-ray images with text descriptions. It was fine-tuned from OpenCLIP (ViT-L/14) on the MIMIC-CXR dataset, with clinical reports processed by GPT-4.
Downloads: 103
Release Time: 5/22/2024

Model Overview

WhyXrayCLIP delivers strong zero-shot and linear-probing performance across multiple chest X-ray datasets, significantly outperforming models such as PubMedCLIP and BioMedCLIP.
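As a concrete illustration of the linear-probing setup, the sketch below uses the frozen WhyXrayCLIP image encoder as a feature extractor and fits a logistic-regression head on top. It assumes the weights load through OpenCLIP under the checkpoint id hf-hub:yyupenn/whyxrayclip; that id, the file names, and the labels are placeholders rather than values confirmed by the release.

```python
import numpy as np
import torch
import open_clip
from PIL import Image
from sklearn.linear_model import LogisticRegression

# Assumed checkpoint id; replace with the actual released WhyXrayCLIP weights.
model, _, preprocess = open_clip.create_model_and_transforms("hf-hub:yyupenn/whyxrayclip")
model.eval()

def embed(paths):
    # Encode each image with the frozen vision tower and L2-normalise the features.
    feats = []
    with torch.no_grad():
        for path in paths:
            x = preprocess(Image.open(path)).unsqueeze(0)
            f = model.encode_image(x)
            feats.append((f / f.norm(dim=-1, keepdim=True)).squeeze(0).numpy())
    return np.stack(feats)

# Placeholder file lists and binary labels; substitute a real labelled X-ray split.
train_paths, train_labels = ["xray_001.jpg", "xray_002.jpg"], [1, 0]
test_paths, test_labels = ["xray_003.jpg"], [1]

clf = LogisticRegression(max_iter=1000).fit(embed(train_paths), train_labels)
print("Linear-probe accuracy:", clf.score(embed(test_paths), test_labels))
```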

Model Features

Zero-shot X-ray classification
Classifies X-ray images against free-text labels without task-specific training (see the sketch after this list).
High performance
Significantly outperforms other models in zero-shot and linear probing tasks across multiple chest X-ray datasets.
Multilingual support
Supports text descriptions in both Chinese and English.
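A minimal zero-shot classification sketch, assuming the model loads through OpenCLIP under the checkpoint id hf-hub:yyupenn/whyxrayclip and uses the standard ViT-L/14 tokenizer; the image path, candidate labels, and prompt template below are illustrative only.

```python
import torch
from PIL import Image
import open_clip

# Assumed checkpoint id; adjust to the released WhyXrayCLIP weights if it differs.
model, _, preprocess = open_clip.create_model_and_transforms("hf-hub:yyupenn/whyxrayclip")
tokenizer = open_clip.get_tokenizer("ViT-L-14")  # WhyXrayCLIP is based on ViT-L/14
model.eval()

# Illustrative candidate findings; any set of free-text descriptions works.
labels = ["cardiomegaly", "pleural effusion", "pneumonia", "no acute findings"]
prompts = [f"a chest X-ray showing {label}" for label in labels]

image = preprocess(Image.open("chest_xray.jpg")).unsqueeze(0)  # placeholder image path
text = tokenizer(prompts)

with torch.no_grad():
    img_feat = model.encode_image(image)
    txt_feat = model.encode_text(text)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
    probs = (100.0 * img_feat @ txt_feat.T).softmax(dim=-1)

for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")
```

Ranking candidate descriptions by image-text similarity is the standard CLIP zero-shot recipe; no task-specific labels or further training are needed at inference time.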

Model Capabilities

X-ray image classification
Image-text alignment
Zero-shot learning

Use Cases

Medical image analysis
Cardiomegaly detection
Use WhyXrayCLIP to detect cardiomegaly in chest X-ray images.
Performs strongly across multiple evaluation datasets.
Pleural effusion detection
Use WhyXrayCLIP to detect pleural effusion in chest X-ray images (see the detection sketch after this list).
Performs strongly across multiple evaluation datasets.
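For binary detection tasks like the two use cases above, a common pattern is to contrast a "finding present" prompt with a "finding absent" prompt for each condition and take the softmax over the pair. The sketch below reuses the assumed checkpoint id and placeholder image path from the earlier examples.

```python
import torch
from PIL import Image
import open_clip

# Assumed checkpoint id and placeholder image path, as in the earlier sketches.
model, _, preprocess = open_clip.create_model_and_transforms("hf-hub:yyupenn/whyxrayclip")
tokenizer = open_clip.get_tokenizer("ViT-L-14")
model.eval()

image = preprocess(Image.open("chest_xray.jpg")).unsqueeze(0)

findings = ["cardiomegaly", "pleural effusion"]
with torch.no_grad():
    img = model.encode_image(image)
    img = img / img.norm(dim=-1, keepdim=True)
    for finding in findings:
        # Contrast "present" vs. "absent" prompts and softmax over the pair.
        text = tokenizer([f"a chest X-ray showing {finding}",
                          f"a chest X-ray with no {finding}"])
        txt = model.encode_text(text)
        txt = txt / txt.norm(dim=-1, keepdim=True)
        p_present = (100.0 * img @ txt.T).softmax(dim=-1)[0, 0].item()
        print(f"{finding}: P(present) = {p_present:.3f}")
```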