🚀 PaliGemma model card
PaliGemma is a versatile vision-language model that combines image and text inputs to generate diverse text outputs. It is fine-tuned on specific datasets and released in several formats for research. The model has wide-ranging applications in tasks such as image captioning, question answering, and object segmentation.
Model page: PaliGemma
Transformers PaliGemma 3B weights, fine-tuned with 224×224 input images on the Widget_Captioning dataset. The models are available in float32, bfloat16 and float16 formats for research purposes only. The fine-tune config is available at big_vision.
Resources and technical documentation:
Terms of Use: Terms
Authors: Google
✨ Features
Model information
Model summary
Description
PaliGemma is a versatile and lightweight vision-language model (VLM) inspired by PaLI-3 and based on open components such as the SigLIP vision model and the Gemma language model. It takes both image and text as input and generates text as output, supporting multiple languages. It is designed for class-leading fine-tune performance on a wide range of vision-language tasks such as image and short video captioning, visual question answering, text reading, object detection and object segmentation.
Model architecture
PaliGemma is the composition of a Transformer decoder and a Vision Transformer image encoder, with a total of 3 billion parameters. The text decoder is initialized from Gemma-2B. The image encoder is initialized from SigLIP-So400m/14. PaliGemma is trained following the PaLI-3 recipes.
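A minimal sketch of how this composition surfaces in the Transformers checkpoint, assuming the mix checkpoint used later in this card (attribute names follow the PaliGemma config in transformers and may differ across library versions):

```python
from transformers import PaliGemmaForConditionalGeneration

model = PaliGemmaForConditionalGeneration.from_pretrained("google/paligemma-3b-mix-224")

# The config pairs a SigLIP vision encoder with a Gemma text decoder.
print(model.config.vision_config.model_type)  # e.g. "siglip_vision_model"
print(model.config.text_config.model_type)    # e.g. "gemma"

# Roughly 3 billion parameters in total across encoder, projector, and decoder.
print(f"{sum(p.numel() for p in model.parameters()):,}")
```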
Inputs and outputs
- Input: Image and text string, such as a prompt to caption the image, or a question.
- Output: Generated text in response to the input, such as a caption of the image, an answer to a question, a list of object bounding box coordinates, or segmentation codewords.
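As a hedged illustration of those output formats (the token values below are invented; the general shape follows the big_vision conventions for localization and segmentation outputs):

```python
# Illustrative, invented outputs showing the general shape of each task's response.
outputs = {
    "caption en": "A blue car parked on the side of the street.",
    # Detection: four <locXXXX> tokens per box (binned coordinates), then the object name.
    "detect car": "<loc0123><loc0456><loc0789><loc1000> car",
    # Segmentation: location tokens for the box followed by <segXXX> codewords for the mask.
    "segment car": "<loc0123><loc0456><loc0789><loc1000> <seg012><seg034> car",
}
```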
Model data
Pre-training datasets
PaliGemma is pre-trained on the following mixture of datasets:
- WebLI: WebLI (Web Language Image) is a web-scale multilingual image-text dataset built from the public web. A wide range of WebLI splits are used to acquire versatile model capabilities, such as visual semantic understanding, object localization, visually-situated text understanding, multilinguality, etc.
- CC3M-35L: Curated English image-alt_text pairs from webpages (Sharma et al., 2018). We used the Google Cloud Translation API to translate into 34 additional languages.
- VQ²A-CC3M-35L/VQG-CC3M-35L: A subset of VQ2A-CC3M (Changpinyo et al., 2022a), translated into the same additional 34 languages as CC3M-35L, using the Google Cloud Translation API.
- OpenImages: Detection and object-aware questions and answers (Piergiovanni et al. 2022) generated by handcrafted rules on the OpenImages dataset.
- WIT: Images and texts collected from Wikipedia (Srinivasan et al., 2021).
Data responsibility filtering
The following filters are applied to WebLI, with the goal of training PaliGemma on clean data:
- Pornographic image filtering: This filter removes images deemed to be of pornographic nature.
- Text safety filtering: We identify and filter out images that are paired with unsafe text. Unsafe text is any text deemed to contain or be about CSAI, pornography, vulgarities, or otherwise offensive.
- Text toxicity filtering: We further use the Perspective API to identify and filter out images that are paired with text deemed insulting, obscene, hateful or otherwise toxic.
- Text personal information filtering: We filter certain personal information and other sensitive data using the Cloud Data Loss Prevention (DLP) API to protect the privacy of individuals. Identifiers such as social security numbers and other sensitive information types are removed.
- Additional methods: Filtering based on content quality and safety in line with our policies and practices.
🚀 Quick Start
PaliGemma is a single-turn vision-language model not meant for conversational use, and it works best when fine-tuned to a specific use case.
You can configure which task the model will solve by conditioning it with task prefixes, such as “detect” or “segment”. The pretrained models were trained in this fashion to imbue them with a rich set of capabilities (question answering, captioning, segmentation, etc.). However, they are not designed to be used directly; they are meant to be transferred (by fine-tuning) to specific tasks using a similar prompt structure. For interactive testing, you can use the "mix" family of models, which have been fine-tuned on a mixture of tasks.
Please refer to the usage and limitations section for intended use cases, or visit the blog post for additional details and examples.
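For a quick feel of the prompt structure, here is a hedged sketch of task-prefix prompts. The caption, detect, and segment prefixes are mentioned in this card; ocr and answer are additional prefixes described in the blog post, which remains the authoritative list:

```python
# Hedged examples of task-prefix prompts (see the blog post for the full set).
prompt = "caption en"                          # short English caption
prompt = "caption es"                          # short Spanish caption (used in the examples below)
prompt = "ocr"                                 # read text visible in the image
prompt = "answer en What color is the car?"    # visual question answering
prompt = "detect car"                          # bounding boxes as location tokens
prompt = "segment car"                         # segmentation codewords
```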
💻 Usage Examples
Basic Usage
Running the default precision (float32) on CPU
```python
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration
from PIL import Image
import requests
import torch

model_id = "google/paligemma-3b-mix-224"

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)

model = PaliGemmaForConditionalGeneration.from_pretrained(model_id).eval()
processor = AutoProcessor.from_pretrained(model_id)

# Instruct the model to create a caption in Spanish
prompt = "caption es"
model_inputs = processor(text=prompt, images=image, return_tensors="pt")
input_len = model_inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False)
    generation = generation[0][input_len:]
    decoded = processor.decode(generation, skip_special_tokens=True)
    print(decoded)
```
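Two details of this pattern are worth noting: model.generate returns the prompt tokens followed by the newly generated tokens, which is why the output is sliced with input_len before decoding, and do_sample=False selects greedy decoding so the caption is deterministic across runs.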
Advanced Usage
Running other precisions on CUDA
```python
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration
from PIL import Image
import requests
import torch

model_id = "google/paligemma-3b-mix-224"
device = "cuda:0"
dtype = torch.bfloat16

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)

model = PaliGemmaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=dtype,
    device_map=device,
    revision="bfloat16",
).eval()
processor = AutoProcessor.from_pretrained(model_id)

# Instruct the model to create a caption in Spanish
prompt = "caption es"
model_inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
input_len = model_inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False)
    generation = generation[0][input_len:]
    decoded = processor.decode(generation, skip_special_tokens=True)
    print(decoded)
```
Loading in 4-bit / 8-bit
You need to install bitsandbytes and accelerate to run inference in 8-bit or 4-bit precision:

```bash
pip install bitsandbytes accelerate
```
```python
from transformers import AutoProcessor, BitsAndBytesConfig, PaliGemmaForConditionalGeneration
from PIL import Image
import requests
import torch

model_id = "google/paligemma-3b-mix-224"
device = "cuda:0"
dtype = torch.bfloat16

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)

quantization_config = BitsAndBytesConfig(load_in_8bit=True)

model = PaliGemmaForConditionalGeneration.from_pretrained(
    model_id, quantization_config=quantization_config
).eval()
processor = AutoProcessor.from_pretrained(model_id)

# Instruct the model to create a caption in Spanish
prompt = "caption es"
model_inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
input_len = model_inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False)
    generation = generation[0][input_len:]
    decoded = processor.decode(generation, skip_special_tokens=True)
    print(decoded)
```
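The heading above also mentions 4-bit loading. A minimal sketch of the 4-bit variant, assuming the same bitsandbytes backend (the compute dtype is a common choice, not a requirement):

```python
# Hedged 4-bit variant of the quantization config above.
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # weights stay in 4-bit, matmuls run in bfloat16
)
model = PaliGemmaForConditionalGeneration.from_pretrained(
    model_id, quantization_config=quantization_config
).eval()
```

The rest of the pipeline (processor, prompt, generate) is unchanged.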
🔧 Technical Details
Hardware
PaliGemma was trained using the latest generation of Tensor Processing Unit (TPU) hardware (TPUv5e).
Software
Training was done using JAX, Flax, TFDS and big_vision.
JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models.
TFDS is used to access datasets and Flax is used for model architecture. The PaliGemma fine-tune code and inference code are released in the big_vision GitHub repository.
📚 Documentation
Benchmark results
In order to verify the transferability of PaliGemma to a wide variety of academic tasks, we fine-tune the pretrained models on each task. Additionally, we train the mix model with a mixture of the transfer tasks. We report results on different resolutions to provide an impression of which tasks benefit from increased resolution. Importantly, none of these tasks or datasets are part of the pretraining data mixture, and their images are explicitly removed from the web-scale pre-training data.
Mix model (fine-tuned on a mixture of transfer tasks)
| Benchmark | Metric (split) | mix-224 | mix-448 |
|---|---|---|---|
| MMVP | Paired Accuracy | 46.00 | 45.33 |
| POPE | Accuracy (random/popular/adversarial) | 88.00 / 86.63 / 85.67 | 89.37 / 88.40 / 87.47 |
| GQA | Accuracy (test) | 65.20 | 65.47 |
Single task (fine-tuned on a single task)
| Benchmark (train split) | Metric (split) | pt-224 | pt-448 | pt-896 |
|---|---|---|---|---|
| Captioning | | | | |
| COCO captions (train+restval) | CIDEr (val) | 141.92 | 144.60 | |
| NoCaps (Eval of COCO captions transfer) | CIDEr (val) | 121.72 | 123.58 | |
| COCO-35L (train) | CIDEr dev (en/avg-34/avg) | 139.2 / 115.8 / 116.4 | 141.2 / 118.0 / 118.6 | |
| XM3600 (Eval of COCO-35L transfer) | CIDEr dev (en/avg-34/avg) | 78.1 / 41.3 / 42.4 | 80.0 / 41.9 / 42.9 | |
| TextCaps (train) | CIDEr (val) | 127.48 | 153.94 | |
| SciCap (first sentence, no subfigure) (train+val) | CIDEr/BLEU-4 (test) | 162.25 / 0.192 | 181.49 / 0.211 | |
| Screen2words (train+dev) | CIDEr (test) | 117.57 | 119.59 | |
| Widget Captioning (train+dev) | CIDEr (test) | 136.07 | 148.36 | |
| Question answering | | | | |
| VQAv2 (train+validation) | Accuracy (Test server - std) | 83.19 | 85.64 | |
| MMVP (Eval of VQAv2 transfer) | Paired Accuracy | 47.33 | 45.33 | |
| POPE (Eval of VQAv2 transfer) | Accuracy (random/popular/adversarial) | 87.80 / 85.87 / 84.27 | 88.23 / 86.77 / 85.90 | |
| OKVQA (train) | Accuracy (val) | 63.54 | 63.15 | |
| A-OKVQA (MC) (train+val) | Accuracy (Test server) | 76.37 | 76.90 | |
| A-OKVQA (DA) (train+val) | Accuracy (Test server) | 61.85 | 63.22 | |
| GQA (train_balanced+val_balanced) | Accuracy (testdev balanced) | 65.61 | 67.03 | |
| xGQA (Eval of GQA transfer) | Mean Accuracy (bn, de, en, id, ko, pt, ru, zh) | 58.37 | 59.07 | |
| NLVR2 (train+dev) | Accuracy (test) | 90.02 | 88.93 | |
| MaRVL (Eval of NLVR2 transfer) | Mean Accuracy (test) (id, sw, ta, tr, zh) | 80.57 | 76.78 | |
| AI2D (train) | Accuracy (test) | 72.12 | 73.28 | |
| ScienceQA (Img subset, no CoT) (train+val) | Accuracy (test) | 95.39 | 95.93 | |
| RSVQA-LR (Non numeric) (train+val) | Mean Accuracy (test) | 92.65 | 93.11 | |
| RSVQA-HR (Non numeric) (train+val) | Mean Accuracy (test/test2) | 92.61 / 90.58 | 92.79 / 90.54 | |
| ChartQA (human+aug)x(train+val) | Mean Relaxed Accuracy (test_human, test_aug) | 57.08 | 71.36 | |
| VizWiz VQA (train+val) | Accuracy (Test server - std) | 73.7 | 75.52 | |
| TallyQA (train) | Accuracy (test_simple/test_complex) | 81.72 / 69.56 | 84.86 / 72.27 | |
| OCR-VQA (train+val) | Accuracy (test) | 72.32 | 74.61 | 74.93 |
| TextVQA (train+val) | Accuracy (Test server - std) | 55.47 | 73.15 | 76.48 |
| DocVQA (train+val) | ANLS (Test server) | 43.74 | 78.02 | 84.77 |
| Infographic VQA (train+val) | ANLS (Test server) | 28.46 | 40.47 | 47.75 |
| SceneText VQA (train+val) | ANLS (Test server) | 63.29 | 81.82 | 84.40 |
| Segmentation | | | | |
| RefCOCO (combined refcoco, refcoco+, refcocog, excluding val and test images) | MIoU (validation) refcoco/refcoco+/refcocog | 73.40 / 68.32 / 67.65 | 75.57 / 69.76 / 70.17 | 76.94 / 72.18 / 72.22 |
📄 License
This model is released under the Gemma license.
⚠️ Important Note
To access PaliGemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below. Requests are processed immediately.
💡 Usage Tip
PaliGemma is a single-turn vision-language model not meant for conversational use, and it works best when fine-tuned to a specific use case. You can configure which task the model will solve by conditioning it with task prefixes, such as “detect” or “segment”.