🚀 BiomedCLIP-PubMedBERT_256-vit_base_patch16_224
BiomedCLIP is a biomedical vision-language foundation model. It is pretrained with contrastive learning on PMC-15M, a dataset of 15 million figure-caption pairs extracted from biomedical research articles in PubMed Central. It uses PubMedBERT as the text encoder and a Vision Transformer as the image encoder, both with domain-specific adaptations. It can handle a variety of vision-language processing tasks, including cross-modal retrieval, image classification, and visual question answering, and it establishes new state-of-the-art results on a wide range of standard datasets, outperforming previous VLP approaches.
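For readers unfamiliar with contrastive pretraining, the sketch below illustrates the symmetric, CLIP-style objective that this kind of image-text pretraining is based on. It is a minimal, illustrative implementation written for this card, not the released training code; the function name and the assumption that the features are already L2-normalized are ours.

import torch
import torch.nn.functional as F

def clip_style_contrastive_loss(image_features, text_features, logit_scale):
    # Illustrative sketch only -- not BiomedCLIP's actual training code.
    # image_features, text_features: (batch, dim), assumed L2-normalized.
    logits_per_image = logit_scale * image_features @ text_features.t()
    logits_per_text = logits_per_image.t()
    # The i-th figure should match the i-th caption, and vice versa.
    targets = torch.arange(image_features.shape[0], device=image_features.device)
    return (F.cross_entropy(logits_per_image, targets) +
            F.cross_entropy(logits_per_text, targets)) / 2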
🚀 Quick Start
The following sections cover the training data, how to use the model, the reference, limitations, and further information.
✨ Features
- Multimodal Capability: Combines a text encoder and an image encoder for various vision-language processing tasks.
- State-of-the-Art Results: Establishes new benchmarks on a wide range of standard datasets.
- Domain-Specific Adaptation: Uses PubMedBERT and a Vision Transformer, both adapted to the biomedical domain.
📦 Installation
1. Environment
conda create -n biomedclip python=3.10 -y
conda activate biomedclip
pip install open_clip_torch==2.23.0 transformers==4.35.2 matplotlib
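To confirm the environment is set up correctly, you can optionally run a quick check such as the one below (an informal sketch, not part of the official instructions):

import torch
import open_clip
import transformers

# Print installed versions and whether a GPU is visible.
print('open_clip:', open_clip.__version__)
print('transformers:', transformers.__version__)
print('torch:', torch.__version__, '| CUDA available:', torch.cuda.is_available())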
💻 Usage Examples
Basic Usage
Load from HF hub
import torch
from urllib.request import urlopen
from PIL import Image
from open_clip import create_model_from_pretrained, get_tokenizer
# Load the model and config files from the Hugging Face Hub
model, preprocess = create_model_from_pretrained('hf-hub:microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224')
tokenizer = get_tokenizer('hf-hub:microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224')
# Zero-shot image classification
template = 'this is a photo of '
labels = [
'adenocarcinoma histopathology',
'brain MRI',
'covid line chart',
'squamous cell carcinoma histopathology',
'immunohistochemistry histopathology',
'bone X-ray',
'chest X-ray',
'pie chart',
'hematoxylin and eosin histopathology'
]
dataset_url = 'https://huggingface.co/microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224/resolve/main/example_data/biomed_image_classification_example_data/'
test_imgs = [
'squamous_cell_carcinoma_histopathology.jpeg',
'H_and_E_histopathology.jpg',
'bone_X-ray.jpg',
'adenocarcinoma_histopathology.jpg',
'covid_line_chart.png',
'IHC_histopathology.jpg',
'chest_X-ray.jpg',
'brain_MRI.jpg',
'pie_chart.png'
]
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
model.to(device)
model.eval()
context_length = 256
images = torch.stack([preprocess(Image.open(urlopen(dataset_url + img))) for img in test_imgs]).to(device)
texts = tokenizer([template + l for l in labels], context_length=context_length).to(device)
with torch.no_grad():
    image_features, text_features, logit_scale = model(images, texts)

    logits = (logit_scale * image_features @ text_features.t()).detach().softmax(dim=-1)
    sorted_indices = torch.argsort(logits, dim=-1, descending=True)

    logits = logits.cpu().numpy()
    sorted_indices = sorted_indices.cpu().numpy()

top_k = -1
for i, img in enumerate(test_imgs):
    pred = labels[sorted_indices[i][0]]
    top_k = len(labels) if top_k == -1 else top_k
    print(img.split('/')[-1] + ':')
    for j in range(top_k):
        jth_index = sorted_indices[i][j]
        print(f'{labels[jth_index]}: {logits[i][jth_index]}')
    print('\n')
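Because matplotlib is installed in the environment step above, you can optionally visualize the results. The following is an informal sketch (not part of the official example) that re-downloads each test image and titles it with its top-1 prediction; it assumes the test_imgs, labels, logits, sorted_indices, and dataset_url variables, as well as the PIL and urllib imports, from the snippet above are still in scope.

import math
import matplotlib.pyplot as plt

# Show each test image with its top-1 predicted label and probability.
cols = 3
rows = math.ceil(len(test_imgs) / cols)
fig, axes = plt.subplots(rows, cols, figsize=(4 * cols, 4 * rows))
for i, img_name in enumerate(test_imgs):
    ax = axes.flatten()[i]
    top_idx = sorted_indices[i][0]
    ax.imshow(Image.open(urlopen(dataset_url + img_name)))
    ax.set_title(f'{labels[top_idx]} ({logits[i][top_idx]:.2f})', fontsize=9)
    ax.axis('off')
plt.tight_layout()
plt.show()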
Advanced Usage
Load from local files
import json
from urllib.request import urlopen
from PIL import Image
import torch
from huggingface_hub import hf_hub_download
from open_clip import create_model_and_transforms, get_tokenizer
from open_clip.factory import HF_HUB_PREFIX, _MODEL_CONFIGS
# Download the model and config files
hf_hub_download(
    repo_id="microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224",
    filename="open_clip_pytorch_model.bin",
    local_dir="checkpoints"
)
hf_hub_download(
    repo_id="microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224",
    filename="open_clip_config.json",
    local_dir="checkpoints"
)
# Load the model and config files
model_name = "biomedclip_local"
with open("checkpoints/open_clip_config.json", "r") as f:
    config = json.load(f)
    model_cfg = config["model_cfg"]
    preprocess_cfg = config["preprocess_cfg"]

if (not model_name.startswith(HF_HUB_PREFIX)
        and model_name not in _MODEL_CONFIGS
        and config is not None):
    _MODEL_CONFIGS[model_name] = model_cfg

tokenizer = get_tokenizer(model_name)

model, _, preprocess = create_model_and_transforms(
    model_name=model_name,
    pretrained="checkpoints/open_clip_pytorch_model.bin",
    **{f"image_{k}": v for k, v in preprocess_cfg.items()},
)
# Zero-shot image classification
template = 'this is a photo of '
labels = [
'adenocarcinoma histopathology',
'brain MRI',
'covid line chart',
'squamous cell carcinoma histopathology',
'immunohistochemistry histopathology',
'bone X-ray',
'chest X-ray',
'pie chart',
'hematoxylin and eosin histopathology'
]
dataset_url = 'https://huggingface.co/microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224/resolve/main/example_data/biomed_image_classification_example_data/'
test_imgs = [
'squamous_cell_carcinoma_histopathology.jpeg',
'H_and_E_histopathology.jpg',
'bone_X-ray.jpg',
'adenocarcinoma_histopathology.jpg',
'covid_line_chart.png',
'IHC_histopathology.jpg',
'chest_X-ray.jpg',
'brain_MRI.jpg',
'pie_chart.png'
]
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
model.to(device)
model.eval()
context_length = 256
images = torch.stack([preprocess(Image.open(urlopen(dataset_url + img))) for img in test_imgs]).to(device)
texts = tokenizer([template + l for l in labels], context_length=context_length).to(device)
with torch.no_grad():
    image_features, text_features, logit_scale = model(images, texts)

    logits = (logit_scale * image_features @ text_features.t()).detach().softmax(dim=-1)
    sorted_indices = torch.argsort(logits, dim=-1, descending=True)

    logits = logits.cpu().numpy()
    sorted_indices = sorted_indices.cpu().numpy()

top_k = -1
for i, img in enumerate(test_imgs):
    pred = labels[sorted_indices[i][0]]
    top_k = len(labels) if top_k == -1 else top_k
    print(img.split('/')[-1] + ':')
    for j in range(top_k):
        jth_index = sorted_indices[i][j]
        print(f'{labels[jth_index]}: {logits[i][jth_index]}')
    print('\n')
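The two encoders can also be used separately, for example to cache image embeddings and run text-to-image retrieval against them. The snippet below is an informal sketch written for this card, not part of the official examples; it assumes the model, tokenizer, images, test_imgs, device, and context_length variables from the example above, and it relies on open_clip's encode_image/encode_text methods.

import torch.nn.functional as F

with torch.no_grad():
    # Cache L2-normalized image embeddings once; reuse them for any text query.
    image_embs = F.normalize(model.encode_image(images), dim=-1)

    # Embed a free-text query with the text encoder.
    query = tokenizer(['this is a photo of chest X-ray'], context_length=context_length).to(device)
    query_emb = F.normalize(model.encode_text(query), dim=-1)

# Rank the cached images by cosine similarity to the query.
similarities = (query_emb @ image_embs.t()).squeeze(0)
for idx in similarities.argsort(descending=True):
    print(f'{test_imgs[int(idx)]}: {similarities[idx].item():.3f}')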
Use in Jupyter Notebook
Please refer to this example notebook.
Intended Use
This model is intended to be used solely for (I) future research on vision-language processing and (II) reproducibility of the experimental results reported in the reference paper.
Primary Intended Use
The primary intended use is to support AI researchers building on top of this work. BiomedCLIP and its associated models should be helpful for exploring various biomedical VLP research questions, especially in the radiology domain.
Out-of-Scope Use
Any deployed use case of the model, commercial or otherwise, is currently out of scope. Although we evaluated the models using a broad set of publicly available research benchmarks, the models and evaluations are not intended for deployed use cases. Please refer to the associated paper for more details.
📚 Documentation
Training Data
We have released the BiomedCLIP Data Pipeline at https://github.com/microsoft/BiomedCLIP_data_pipeline, which automatically downloads and processes a set of articles from the PubMed Central Open Access dataset. BiomedCLIP builds upon PMC-15M, a large-scale parallel image-text dataset generated by this pipeline for biomedical vision-language processing. It contains 15 million figure-caption pairs extracted from biomedical research articles in PubMed Central and covers a diverse range of biomedical image types, such as microscopy, radiography, histology, and more.
Reference
@article{zhang2024biomedclip,
  title={A Multimodal Biomedical Foundation Model Trained from Fifteen Million Image–Text Pairs},
  author={Sheng Zhang and Yanbo Xu and Naoto Usuyama and Hanwen Xu and Jaspreet Bagga and Robert Tinn and Sam Preston and Rajesh Rao and Mu Wei and Naveen Valluri and Cliff Wong and Andrea Tupini and Yu Wang and Matt Mazzola and Swadheen Shukla and Lars Liden and Jianfeng Gao and Angela Crabtree and Brian Piening and Carlo Bifulco and Matthew P. Lungren and Tristan Naumann and Sheng Wang and Hoifung Poon},
  journal={NEJM AI},
  year={2024},
  volume={2},
  number={1},
  doi={10.1056/AIoa2400640},
  url={https://ai.nejm.org/doi/full/10.1056/AIoa2400640}
}
Limitations
This model was developed using English corpora, and thus can be considered English-only.
Further Information
Please refer to the corresponding paper, "Large-Scale Domain-Specific Pretraining for Biomedical Vision-Language Processing", for additional details on the model training and evaluation.
📄 License
This project is licensed under the MIT license.