MedGemma
MedGemma is a collection of Gemma 3 variants trained for medical text and image comprehension, which can help developers accelerate the development of healthcare-based AI applications.
🚀 Quick Start
This section describes the MedGemma model and how to use it.
Description
MedGemma is a collection of Gemma 3 variants that are trained for performance on medical text and image comprehension. Developers can use MedGemma to accelerate building healthcare-based AI applications. MedGemma currently comes in two variants: a 4B multimodal version and a 27B text-only version.
MedGemma 27B has been trained exclusively on medical text and optimized for inference-time computation. MedGemma 27B is only available as an instruction-tuned model.
MedGemma variants have been evaluated on a range of clinically relevant benchmarks to illustrate their baseline performance. These include both open benchmark datasets and curated datasets. Developers can fine-tune MedGemma variants for improved performance. Consult the Intended Use section below for more details.
A full technical report will be available soon.
How to use
Below are some example code snippets to help you quickly get started running the model locally on GPU. If you want to use the model at scale, we recommend that you create a production version using [Model Garden](https://cloud.google.com/model-garden).
First, install the Transformers library. Gemma 3 is supported starting from transformers 4.50.0.
```bash
$ pip install -U transformers
```
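If an older version is already installed, model loading will fail, so a quick version guard can make the requirement explicit. This is a minimal sketch; `packaging` ships as a transformers dependency:

```python
import transformers
from packaging import version

# Gemma 3 (and therefore MedGemma) requires transformers >= 4.50.0
if version.parse(transformers.__version__) < version.parse("4.50.0"):
    raise RuntimeError(
        f"transformers {transformers.__version__} is too old; "
        "run: pip install -U transformers"
    )
```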
💻 Usage Examples
Basic Usage
```python
from transformers import pipeline
import torch

pipe = pipeline(
    "text-generation",
    model="google/medgemma-27b-text-it",
    torch_dtype=torch.bfloat16,
    device="cuda",
)

messages = [
    {
        "role": "system",
        "content": "You are a helpful medical assistant."
    },
    {
        "role": "user",
        "content": "How do you differentiate bacterial from viral pneumonia?"
    }
]

output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
```
Advanced Usage
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "google/medgemma-27b-text-it"

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {
        "role": "system",
        "content": "You are a helpful medical assistant."
    },
    {
        "role": "user",
        "content": "How do you differentiate bacterial from viral pneumonia?"
    }
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

input_len = inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generation = model.generate(**inputs, max_new_tokens=200, do_sample=False)
    generation = generation[0][input_len:]

decoded = tokenizer.decode(generation, skip_special_tokens=True)
print(decoded)
```
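For interactive use, generation can also be streamed token by token instead of printed at the end. The following is a minimal sketch that reuses the `model`, `tokenizer`, and `inputs` objects from the snippet above together with transformers' built-in `TextStreamer`:

```python
from transformers import TextStreamer

# Prints tokens to the console as they are generated; skip_prompt avoids
# echoing the chat template back before the answer.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

with torch.inference_mode():
    model.generate(**inputs, max_new_tokens=200, do_sample=False, streamer=streamer)
```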
Examples
See the following Colab notebooks for examples of how to use MedGemma:
- To give the model a quick try, running it locally with weights from Hugging Face, see the [Quick start notebook in Colab](https://colab.research.google.com/github/google-health/medgemma/blob/main/notebooks/quick_start_with_hugging_face.ipynb). Note that you will need to use Colab Enterprise to run the 27B model without quantization (see the quantized-loading sketch after this list).
- For an example of fine-tuning the model, see the [Fine-tuning notebook in Colab](https://colab.research.google.com/github/google-health/medgemma/blob/main/notebooks/fine_tune_with_hugging_face.ipynb).
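Since the 27B model needs roughly 54 GB of accelerator memory in bfloat16, a quantized load is the usual way to fit it on a single smaller GPU. The sketch below uses 4-bit NF4 quantization via bitsandbytes (`pip install bitsandbytes`); treat it as an assumed workable setup rather than an officially benchmarked configuration, and expect some loss in output quality:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

model_id = "google/medgemma-27b-text-it"

# 4-bit NF4 weights with bfloat16 compute; shrinks the ~54 GB bf16
# footprint to roughly 15 GB (approximate figures).
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```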
📚 Documentation
Model architecture overview
The MedGemma model is built on Gemma 3 and uses the same decoder-only transformer architecture as Gemma 3. To read more about the architecture, consult the Gemma 3 model card.
Technical specifications
| Property | Details |
|---|---|
| Model Type | Decoder-only Transformer architecture; see the [Gemma 3 technical report](https://storage.googleapis.com/deepmind-media/gemma/Gemma3Report.pdf) |
| Modalities | 4B: text, vision; 27B: text only |
| Attention mechanism | Grouped-query attention (GQA) |
| Context length | Supports long context, at least 128K tokens |
| Key publication | Coming soon |
| Model created | May 20, 2025 |
| Model version | 1.0.0 |
Citation
A technical report is coming soon. In the meantime, if you publish using this model, please cite the Hugging Face model page:
```bibtex
@misc{medgemma-hf,
    author = {Google},
    title = {MedGemma Hugging Face},
    howpublished = {\url{https://huggingface.co/collections/google/medgemma-release-680aade845f90bec6a3f60c4}},
    year = {2025},
    note = {Accessed: [Insert Date Accessed, e.g., 2025-05-20]}
}
```
Inputs and outputs
Input:
- Text string, such as a question or prompt
- Total input length of 128K tokens
Output:
- Generated text in response to the input, such as an answer to a question, analysis of image content, or a summary of a document
- Total output length of 8192 tokens
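These limits are easier to enforce in application code than to discover at generation time. The helper below is a hypothetical sketch: the constants come from the limits above, but `check_budget` is not part of any MedGemma or transformers API:

```python
MAX_INPUT_TOKENS = 128_000   # documented context length
MAX_OUTPUT_TOKENS = 8_192    # documented output budget

def check_budget(tokenizer, messages, requested_new_tokens):
    """Validate a chat request against the documented token limits."""
    ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, tokenize=True
    )
    if len(ids) > MAX_INPUT_TOKENS:
        raise ValueError(
            f"Prompt is {len(ids)} tokens; the context limit is {MAX_INPUT_TOKENS}."
        )
    # Cap generation at the model's maximum output length
    return min(requested_new_tokens, MAX_OUTPUT_TOKENS)
```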
Performance and validation
MedGemma was evaluated across a range of different multimodal classification, report generation, visual question answering, and text-based tasks.
Key performance metrics
Text evaluations
MedGemma 4B and text-only MedGemma 27B were evaluated across a range of text-only benchmarks for medical knowledge and reasoning.
The MedGemma models outperform their respective base Gemma models across all tested text-only health benchmarks.
| Metric | MedGemma 27B | Gemma 3 27B | MedGemma 4B | Gemma 3 4B |
|---|---|---|---|---|
| MedQA (4-op) | 89.8 (best-of-5), 87.7 (0-shot) | 74.9 | 64.4 | 50.7 |
| MedMCQA | 74.2 | 62.6 | 55.7 | 45.4 |
| PubMedQA | 76.8 | 73.4 | 73.4 | 68.4 |
| MMLU Med (text only) | 87.0 | 83.3 | 70.0 | 67.2 |
| MedXpertQA (text only) | 26.7 | 15.7 | 14.2 | 11.6 |
| AfriMed-QA | 84.0 | 72.0 | 52.0 | 48.0 |
For all MedGemma 27B results, test-time scaling is used to improve performance.
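The card does not spell out the exact test-time scaling recipe, but the "best-of-5" MedQA figure is consistent with self-consistency voting: sample several completions and take the majority answer. The function below is a hedged illustration of that idea for multiple-choice questions, assuming the prompt asks the model to reply with a single option letter; it reuses the `pipe` object from the Basic Usage snippet:

```python
from collections import Counter

def best_of_n_answer(pipe, messages, n=5):
    """Majority-vote over n sampled completions (self-consistency).

    Illustrative only; the exact procedure behind the reported
    numbers is not published.
    """
    votes = []
    for _ in range(n):
        out = pipe(text=messages, max_new_tokens=16, do_sample=True, temperature=0.7)
        reply = out[0]["generated_text"][-1]["content"].strip()
        votes.append(reply[:1].upper())  # assumes a leading option letter, e.g. "B"
    return Counter(votes).most_common(1)[0][0]
```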
Ethics and safety evaluation
Evaluation approach
Our evaluation methods include structured evaluations and internal red-teaming of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including:
- Child safety: Evaluation of text-to-text and image-to-text prompts covering child safety policies, including child sexual abuse and exploitation.
- Content safety: Evaluation of text-to-text and image-to-text prompts covering safety policies, including harassment, violence and gore, and hate speech.
- Representational harms: Evaluation of text-to-text and image-to-text prompts covering safety policies, including bias, stereotyping, and harmful associations or inaccuracies.
- General medical harms: Evaluation of text-to-text and image-to-text prompts covering safety policies, including information quality and harmful associations or inaccuracies.
In addition to development-level evaluations, we conduct "assurance evaluations", which are our "arms-length" internal evaluations for responsibility governance decision making. They are conducted separately from the model development team, to inform decision making about release. High-level findings are fed back to the model team, but prompt sets are held out to prevent overfitting and preserve the results' ability to inform decision making. Notable assurance evaluation results are reported to our Responsibility & Safety Council as part of release review.
Evaluation results
For all areas of safety testing, we saw safe levels of performance across the categories of child safety, content safety, and representational harms. All testing was conducted without safety filters to evaluate the model capabilities and behaviors. For text-to-text, image-to-text, and audio-to-text, and across both MedGemma model sizes, the model produced minimal policy violations. A limitation of our evaluations was that they included primarily English language prompts.
📚 Documentation
Dataset overview
Training
The base Gemma models are pre-trained on a large corpus of text and code data. MedGemma 4B utilizes a SigLIP image encoder that has been specifically pre-trained on a variety of de-identified medical data, including radiology images, histopathology images, ophthalmology images, and dermatology images. Its LLM component is trained on a diverse set of medical data, including medical text relevant to radiology images, chest X-rays, histopathology patches, ophthalmology images, and dermatology images.
Evaluation
MedGemma models have been evaluated on a comprehensive set of clinically relevant benchmarks, including over 22 datasets across 5 different tasks and 6 medical image modalities. These include both open benchmark datasets and curated datasets, with a focus on expert human evaluations for tasks like CXR report generation and radiology VQA.
Source
MedGemma utilizes a combination of public and private datasets.
This model was trained on diverse public datasets including MIMIC-CXR (chest X-rays and reports), Slake-VQA (multimodal medical images and questions), PAD-UFES-20 (skin lesion images and data), SCIN (dermatology images), TCGA (cancer genomics data), CAMELYON (lymph node histopathology images), PMC-OA (biomedical literature with images), and Mendeley Digital Knee X-Ray (knee X-rays).
Additionally, multiple diverse proprietary datasets were licensed and incorporated (described next).
Data Ownership and Documentation
- [MIMIC-CXR](https://physionet.org/content/mimic-cxr/2.1.0/): MIT Laboratory for Computational Physiology and Beth Israel Deaconess Medical Center (BIDMC).
- [Slake-VQA](https://www.med-vqa.com/slake/): The Hong Kong Polytechnic University.
📄 License
To access MedGemma on Hugging Face, you're required to review and agree to the [Health AI Developer Foundations terms of use](https://developers.google.com/health-ai-developer-foundations/terms). Make sure you're logged in to Hugging Face when you submit your access request; requests are processed immediately. The use of MedGemma is governed by the same terms.
Unsloth Dynamic 2.0 achieves superior accuracy & outperforms other leading quants.

