WhyLesionCLIP

Developed by yyupenn
WhyLesionCLIP is a fine-tuned OpenCLIP (ViT-L/14) model that aligns skin lesion images with text descriptions. It was trained on the ISIC dataset and supports zero-shot skin lesion classification.
Downloads 339
Release Time: 6/6/2024

Model Overview

This model aligns skin lesion images with text descriptions. It is primarily intended for zero-shot medical image (skin lesion) classification research and can also serve as a feature extractor for downstream tasks.
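A minimal zero-shot classification sketch follows, assuming the model loads through the open_clip library with the ViT-L/14 architecture and a locally downloaded WhyLesionCLIP checkpoint; the checkpoint path, image file, and lesion labels are illustrative placeholders, not part of the official release.

```python
import torch
import open_clip
from PIL import Image

# Load the ViT-L/14 backbone; the checkpoint path is an assumption --
# replace it with the actual WhyLesionCLIP weights you downloaded.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-L-14", pretrained="whylesionclip.pt"
)
tokenizer = open_clip.get_tokenizer("ViT-L-14")
model.eval()

# Candidate text prompts describing lesion classes (illustrative labels only).
labels = ["melanoma", "basal cell carcinoma", "benign nevus"]
prompts = [f"a dermoscopic image of {label}" for label in labels]

image = preprocess(Image.open("lesion.jpg")).unsqueeze(0)
text = tokenizer(prompts)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize, compute cosine-similarity logits, then softmax over classes.
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")
```

The class whose prompt embedding is closest to the image embedding receives the highest probability, which is how zero-shot classification works without any lesion-specific training of a classifier head.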

Model Features

Medical image alignment
Precisely aligns skin lesion images with clinical text descriptions, supporting medical image understanding.
Zero-shot classification
Classifies new categories of skin lesions without additional training, demonstrating strong adaptability.
Cross-modal feature extraction
Extracts image and text features in a shared embedding space, supporting multimodal medical research.

Model Capabilities

Skin lesion image classification
Medical image-text alignment
Zero-shot learning
Cross-modal feature extraction

Use Cases

Medical research
Skin lesion classification
Performs zero-shot classification of skin lesion images using text prompts.
Significantly outperforms other CLIP variants on multiple skin lesion datasets.
Feature extraction
Extracts skin lesion image features for downstream medical analysis tasks.
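A minimal feature-extraction sketch, assuming the same open_clip loading path as above; the checkpoint path and image file names are hypothetical placeholders.

```python
import torch
import open_clip
from PIL import Image

# Reuse the (assumed) WhyLesionCLIP checkpoint to obtain an image encoder.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-L-14", pretrained="whylesionclip.pt"
)
model.eval()

# Encode a batch of lesion images into L2-normalized embeddings that can be
# fed to a downstream classifier (e.g., a linear probe) or to clustering.
paths = ["lesion_001.jpg", "lesion_002.jpg"]  # hypothetical file names
batch = torch.stack([preprocess(Image.open(p)) for p in paths])

with torch.no_grad():
    features = model.encode_image(batch)
    features = features / features.norm(dim=-1, keepdim=True)

print(features.shape)  # (num_images, 768) for the ViT-L/14 backbone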