MedGemma Model
MedGemma is a collection of models based on Gemma 3, specifically trained for medical text and image comprehension. It comes in 4B and 27B variants, offering different capabilities for healthcare-based AI application development.
Quick Start
How to Use
Below are some example code snippets to help you quickly get started running the model locally on GPU. If you want to use the model at scale, we recommend that you create a production version using Model Garden.
First, install the Transformers library. Gemma 3 is supported starting from transformers 4.50.0.
$ pip install -U transformers
Run the model with the pipeline API
from transformers import pipeline
from PIL import Image
import requests
import torch
pipe = pipeline(
    "image-text-to-text",
    model="google/medgemma-4b-it",
    torch_dtype=torch.bfloat16,
    device="cuda",
)
# Image attribution: Stillwaterising, CC0, via Wikimedia Commons
image_url = "https://upload.wikimedia.org/wikipedia/commons/c/c8/Chest_Xray_PA_3-8-2010.png"
image = Image.open(requests.get(image_url, headers={"User-Agent": "example"}, stream=True).raw)
messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are an expert radiologist."}]
    },
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this X-ray"},
            {"type": "image", "image": image}
        ]
    }
]
output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
Run the model directly
# pip install accelerate
from transformers import AutoProcessor, AutoModelForImageTextToText
from PIL import Image
import requests
import torch
model_id = "google/medgemma-4b-it"
model = AutoModelForImageTextToText.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)
# Image attribution: Stillwaterising, CC0, via Wikimedia Commons
image_url = "https://upload.wikimedia.org/wikipedia/commons/c/c8/Chest_Xray_PA_3-8-2010.png"
image = Image.open(requests.get(image_url, headers={"User-Agent": "example"}, stream=True).raw)
messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are an expert radiologist."}]
    },
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this X-ray"},
            {"type": "image", "image": image}
        ]
    }
]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt"
).to(model.device, dtype=torch.bfloat16)
input_len = inputs["input_ids"].shape[-1]
with torch.inference_mode():
    generation = model.generate(**inputs, max_new_tokens=200, do_sample=False)
    generation = generation[0][input_len:]
decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
Features
- Multiple Variants: MedGemma comes in 4B and 27B variants. The 4B variant supports both text and vision modalities, while the 27B variant is text-only.
- High Performance: Outperforms the base Gemma 3 models across various multimodal and text-only health benchmarks.
- Versatile Applications: Can be used for a wide range of tasks such as medical image classification, report generation, visual question answering, and text-based tasks (a text-only usage sketch follows this list).
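For text-based tasks, a minimal sketch in the style of the multimodal examples above is shown below. It assumes the 27B text-only instruction-tuned checkpoint is published under the model ID `google/medgemma-27b-text-it` (check the MedGemma collection for the exact identifier) and that sufficient GPU memory is available.

```python
from transformers import pipeline
import torch

# Text-only usage sketch. The model ID below is an assumption based on the
# naming of the 4B checkpoint; verify it on the MedGemma Hugging Face page.
# The 27B model requires a large GPU (or quantization, see below).
pipe = pipeline(
    "text-generation",
    model="google/medgemma-27b-text-it",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful medical assistant."},
    {"role": "user", "content": "What are common causes of community-acquired pneumonia?"},
]

output = pipe(messages, max_new_tokens=300)
print(output[0]["generated_text"][-1]["content"])
```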
Installation
First, install the Transformers library. Gemma 3 is supported starting from transformers 4.50.0.
$ pip install -U transformers
Usage Examples
Examples
See the following Colab notebooks for examples of how to use MedGemma:
- To give the model a quick try, running it locally with weights from Hugging Face, see the [Quick start notebook in Colab](https://colab.research.google.com/github/google-health/medgemma/blob/main/notebooks/quick_start_with_hugging_face.ipynb). Note that you will need to use Colab Enterprise to run the 27B model without quantization (a 4-bit loading sketch follows this list).
- For an example of fine-tuning the model, see the [Fine-tuning notebook in Colab](https://colab.research.google.com/github/google-health/medgemma/blob/main/notebooks/fine_tune_with_hugging_face.ipynb).
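As a hedged sketch of the quantized path mentioned above, the following loads the 27B text-only model in 4-bit precision with bitsandbytes so it can fit on a single high-memory GPU. The model ID and memory behavior are assumptions; verify them against the MedGemma collection and your hardware, and expect some accuracy trade-off from quantization.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

# 4-bit loading sketch (requires the bitsandbytes package).
# The model ID is an assumption based on the 4B naming convention.
model_id = "google/medgemma-27b-text-it"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```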
Documentation
Model information
Description
MedGemma is a collection of Gemma 3 variants that are trained for performance on medical text and image comprehension. Developers can use MedGemma to accelerate building healthcare-based AI applications. MedGemma currently comes in two variants: a 4B multimodal version and a 27B text-only version.
MedGemma 4B utilizes a SigLIP image encoder that has been specifically pre-trained on a variety of de-identified medical data, including chest X-rays, dermatology images, ophthalmology images, and histopathology slides. Its LLM component is trained on a diverse set of medical data, including radiology images, histopathology patches, ophthalmology images, and dermatology images.
MedGemma 4B is available in both pre-trained (suffix: -pt) and instruction-tuned (suffix: -it) versions. The instruction-tuned version is a better starting point for most applications. The pre-trained version is available for those who want to experiment more deeply with the models.
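For illustration, the two 4B variants are selected simply by checkpoint ID. The -it ID appears in the examples above; the -pt ID below is inferred from the suffix convention described here and should be verified on the MedGemma collection page.

```python
from transformers import AutoModelForImageTextToText

# Instruction-tuned checkpoint: recommended starting point for most applications.
model_it = AutoModelForImageTextToText.from_pretrained("google/medgemma-4b-it")

# Pre-trained checkpoint (ID assumed from the -pt suffix convention):
# intended for deeper experimentation, no instruction tuning applied.
model_pt = AutoModelForImageTextToText.from_pretrained("google/medgemma-4b-pt")
```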
MedGemma 27B has been trained exclusively on medical text and optimized for inference-time computation. MedGemma 27B is only available as an instruction-tuned model.
MedGemma variants have been evaluated on a range of clinically relevant benchmarks to illustrate their baseline performance. These include both open benchmark datasets and curated datasets. Developers can fine-tune MedGemma variants for improved performance. Consult the Intended Use section below for more details.
A full technical report will be available soon.
Model architecture overview
MedGemma is built on Gemma 3 and uses the same decoder-only transformer architecture. To read more about the architecture, consult the Gemma 3 model card.
Technical specifications
Property | Details |
---|---|
Model Type | Decoder-only Transformer architecture; see the [Gemma 3 technical report](https://storage.googleapis.com/deepmind-media/gemma/Gemma3Report.pdf) |
Training Data | Not specified |
Modalities | 4B: Text, vision; 27B: Text only |
Attention mechanism | Utilizes grouped-query attention (GQA) |
Context length | Supports long context, at least 128K tokens |
Key publication | Coming soon |
Model created | May 20, 2025 |
Model version | 1.0.0 |
Inputs and outputs
Input:
- Text string, such as a question or prompt
- Images, normalized to 896 x 896 resolution and encoded to 256 tokens each
- Total input length of 128K tokens
Output:
- Generated text in response to the input, such as an answer to a question, analysis of image content, or a summary of a document
- Total output length of 8192 tokens
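As a rough illustration of how these limits interact, each image consumes a fixed 256 tokens of the input budget. The sketch below uses only the figures listed above; the exact accounting performed by the processor may differ slightly.

```python
# Back-of-the-envelope token budgeting based on the figures above.
# Actual tokenization is handled by the processor and may differ.
CONTEXT_LENGTH = 128_000        # approximate total input budget (tokens)
TOKENS_PER_IMAGE = 256          # each 896 x 896 image encodes to 256 tokens
MAX_OUTPUT_TOKENS = 8_192       # maximum generated length

def remaining_text_budget(num_images: int, prompt_tokens: int) -> int:
    """Tokens still available for additional text input."""
    used = num_images * TOKENS_PER_IMAGE + prompt_tokens
    return CONTEXT_LENGTH - used

# Example: a study with 3 images and a ~500-token clinical prompt.
print(remaining_text_budget(num_images=3, prompt_tokens=500))  # -> 126732
```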
Performance and validation
MedGemma was evaluated across a range of different multimodal classification, report generation, visual question answering, and text-based tasks.
Key performance metrics
Imaging evaluations
The multimodal performance of MedGemma 4B was evaluated across a range of benchmarks, focusing on radiology, dermatology, histopathology, ophthalmology, and multimodal clinical reasoning.
MedGemma 4B outperforms the base Gemma 3 4B model across all tested multimodal health benchmarks.
Task and metric | MedGemma 4B | Gemma 3 4B |
---|---|---|
Medical image classification | ||
MIMIC CXR - Average F1 for top 5 conditions | 88.9 | 81.1 |
CheXpert CXR - Average F1 for top 5 conditions | 48.1 | 31.2 |
DermMCQA* - Accuracy | 71.8 | 42.6 |
Visual question answering | ||
SlakeVQA (radiology) - Tokenized F1 | 62.3 | 38.6 |
VQA-Rad** (radiology) - Tokenized F1 | 49.9 | 38.6 |
PathMCQA (histopathology, internal***) - Accuracy | 69.8 | 37.1 |
Knowledge and reasoning | ||
MedXpertQA (text + multimodal questions) - Accuracy | 18.8 | 16.4 |
*Described in [Liu (2020, Nature Medicine)](https://www.nature.com/articles/s41591-020-0842-3), presented as a 4-way MCQ per example for skin condition classification.
**Based on "balanced split," described in Yang (2024, arXiv).
***Based on multiple datasets, presented as 3-9 way MCQ per example for identification, grading, and subtype for breast, cervical, and prostate cancer.
Chest X-ray report generation
MedGemma chest X-ray (CXR) report generation performance was evaluated on [MIMIC-CXR](https://physionet.org/content/mimic-cxr/2.1.0/) using the RadGraph F1 metric. We compare the MedGemma pre-trained checkpoint with our previous best model for CXR report generation, PaliGemma 2.
Metric | MedGemma 4B (pre-trained) | PaliGemma 2 3B (tuned for CXR) | PaliGemma 2 10B (tuned for CXR) |
---|---|---|---|
Chest X-ray report generation | |||
MIMIC CXR - RadGraph F1 | 29.5 | 28.8 | 29.5 |
The instruction-tuned versions of MedGemma 4B and Gemma 3 4B achieve lower scores (0.22 and 0.12, respectively) due to the differences in reporting style compared to the MIMIC ground truth reports. Further fine-tuning on MIMIC reports will enable users to achieve improved performance.
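A minimal parameter-efficient fine-tuning sketch is shown below, assuming the peft library and a LoRA setup in the spirit of the fine-tuning notebook linked above. The target modules and hyperparameters here are illustrative assumptions, not the notebook's exact configuration.

```python
from transformers import AutoModelForImageTextToText
from peft import LoraConfig, get_peft_model
import torch

# Illustrative LoRA configuration; ranks, dropout, and target modules are
# assumptions and should be tuned (or taken from the official notebook).
model = AutoModelForImageTextToText.from_pretrained(
    "google/medgemma-4b-it", torch_dtype=torch.bfloat16, device_map="auto"
)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# The adapted model can then be trained on report-style data (e.g., MIMIC-CXR
# reports) with a standard supervised fine-tuning loop.
```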
Text evaluations
MedGemma 4B and text-only MedGemma 27B were evaluated across a range of text-only benchmarks for medical knowledge and reasoning.
The MedGemma models outperform their respective base Gemma models across all tested text-only health benchmarks.
Metric | MedGemma 27B | Gemma 3 27B | MedGemma 4B | Gemma 3 4B |
---|---|---|---|---|
MedQA (4-op) | 89.8 (best-of-5) 87.7 (0-shot) | 74.9 | 64.4 | 50.7 |
MedMCQA | 74.2 | 62.6 | 55.7 | 45.4 |
PubMedQA | 76.8 | 73.4 | 73.4 | 68.4 |
MMLU Med (text only) | 87.0 | 83.3 | 70.0 | 67.2 |
MedXpertQA (text only) | 26.7 | 15.7 | 14.2 | 11.6 |
AfriMed-QA | 84.0 | 72.0 | 52.0 | 48.0 |
For all MedGemma 27B results, test-time scaling is used to improve performance.
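One common form of test-time scaling behind a "best-of-5" number is self-consistency: sample several candidate answers and take a majority vote. Whether MedGemma's reported best-of-5 uses exactly this procedure is not specified here; the sketch below only illustrates the voting idea, with a hypothetical generate_answer stub standing in for an actual sampled model call.

```python
from collections import Counter
import random

def generate_answer(question: str) -> str:
    """Hypothetical stand-in for a sampled model call (do_sample=True).

    In practice this would prompt the model and parse an answer letter out of
    the generated text; here it returns a random choice so the voting logic
    below is runnable on its own.
    """
    return random.choice(["A", "B", "C", "D"])

def best_of_n(question: str, n: int = 5) -> str:
    """Sample n candidate answers and return the majority-vote winner."""
    votes = Counter(generate_answer(question) for _ in range(n))
    return votes.most_common(1)[0][0]

print(best_of_n("Example multiple-choice medical question"))
```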
Ethics and safety evaluation
Evaluation approach
Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including:
- Child safety: Evaluation of text-to-text and image-to-text prompts covering child safety policies, including child sexual abuse and exploitation.
- Content safety: Evaluation of text-to-text and image-to-text prompts covering safety policies, including harassment, violence and gore, and hate speech.
- Representational harms: Evaluation of text-to-text and image-to-text prompts covering safety policies, including bias, stereotyping, and harmful associations or inaccuracies.
- General medical harms: Evaluation of text-to-text and...
Technical Details
- Model Architecture: Based on the decoder-only transformer architecture of Gemma 3.
- Training: The 4B variant is trained on a combination of medical text and image data, while the 27B variant is trained exclusively on medical text.
- Optimization: The 27B variant is optimized for inference-time computation.
License
The use of MedGemma is governed by the [Health AI Developer Foundations terms of use](https://developers.google.com/health-ai-developer-foundations/terms).
Important Note
To access MedGemma on Hugging Face, you're required to review and agree to the [Health AI Developer Foundations terms of use](https://developers.google.com/health-ai-developer-foundations/terms). To do this, please ensure you're logged in to Hugging Face and click below. Requests are processed immediately.
Model documentation: [MedGemma](https://developers.google.com/health-ai-developer-foundations/medgemma)
Resources:
- Model on Google Cloud Model Garden: [MedGemma](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/medgemma)
- Model on Hugging Face: [MedGemma](https://huggingface.co/collections/google/medgemma-release-680aade845f90bec6a3f60c4)
- GitHub repository (supporting code, Colab notebooks, discussions, and issues): [MedGemma](https://github.com/google-health/medgemma)
- Quick start notebook: [GitHub](https://github.com/google-health/medgemma/blob/main/notebooks/quick_start_with_hugging_face.ipynb)
- Fine-tuning notebook: [GitHub](https://github.com/google-health/medgemma/blob/main/notebooks/fine_tune_with_hugging_face.ipynb)
- Patient Education Demo built using MedGemma
- Support: See [Contact](https://developers.google.com/health-ai-developer-foundations/medgemma/get-started.md#contact)
Citation
A technical report is coming soon. In the meantime, if you publish using this model, please cite the Hugging Face model page:
@misc{medgemma-hf,
    author = {Google},
    title = {MedGemma Hugging Face},
    howpublished = {\url{https://huggingface.co/collections/google/medgemma-release-680aade845f90bec6a3f60c4}},
    year = {2025},
    note = {Accessed: [Insert Date Accessed, e.g., 2025-05-20]}
}