🚀 PaliGemma 2 model card
PaliGemma 2 is an updated vision-language model that incorporates the capabilities of the Gemma 2 models. It takes both images and text as input and generates text output, supports multiple languages, and is suited to a wide range of vision-language tasks.
🚀 Quick Start
The following snippet uses the google/paligemma2-3b-pt-448 model for reference purposes. It is a base model, and it is recommended to fine-tune it on a downstream task before use.
Here is a notebook that showcases fine-tuning PaliGemma 2.
```python
from transformers import (
    PaliGemmaProcessor,
    PaliGemmaForConditionalGeneration,
)
from transformers.image_utils import load_image
import torch

model_id = "google/paligemma2-3b-pt-448"

# Load a sample image from the Hugging Face documentation assets.
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg"
image = load_image(url)

model = PaliGemmaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
).eval()
processor = PaliGemmaProcessor.from_pretrained(model_id)

# An empty prompt asks the base model for a caption-style continuation.
prompt = ""
model_inputs = processor(text=prompt, images=image, return_tensors="pt").to(torch.bfloat16).to(model.device)
input_len = model_inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False)
    # Strip the prompt tokens so only the newly generated text is decoded.
    generation = generation[0][input_len:]
    decoded = processor.decode(generation, skip_special_tokens=True)
    print(decoded)
```
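Beyond the empty prompt, the base (pt) checkpoints respond to the PaliGemma task-prefix convention. The prompts below are illustrative examples following those conventions, not an exhaustive list; swap them into the `prompt` variable of the snippet above.

```python
# Illustrative task prefixes (PaliGemma prompt conventions); exact phrasing
# may need adjusting per task and checkpoint.
prompt = "caption en"                          # English image captioning
# prompt = "answer en What color is the car?"  # visual question answering
# prompt = "detect car"                        # object detection -> <loc....> tokens
# prompt = "segment car"                       # segmentation -> <seg...> codewords
```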
✨ Features
Model information
Model summary
PaliGemma 2 is an update of the PaliGemma vision-language model (VLM), incorporating the capabilities of the Gemma 2 models. The PaliGemma family of models is inspired by PaLI-3 and based on open components such as the SigLIP vision model and the Gemma 2 language models. It takes both image and text as input and generates text as output, supporting multiple languages. It is designed for class-leading fine-tune performance on a wide range of vision-language tasks such as image and short-video captioning, visual question answering, text reading, object detection, and object segmentation.
Model architecture
PaliGemma 2 is the composition of a Transformer decoder and a Vision Transformer image encoder. The text decoder is initialized from Gemma 2 in the 2B, 9B, and 27B parameter sizes. The image encoder is initialized from SigLIP-So400m/14. Similar to the original PaliGemma model, PaliGemma 2 is trained following the PaLI-3 recipes.
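One quick way to see this composition is to inspect the submodules of a loaded checkpoint. This is a minimal sketch; the attribute names (`vision_tower`, `multi_modal_projector`, `language_model`) follow the current transformers implementation of PaliGemma and may change between library versions.

```python
# Minimal sketch: inspect the encoder/decoder composition of a checkpoint.
from transformers import PaliGemmaForConditionalGeneration

model = PaliGemmaForConditionalGeneration.from_pretrained("google/paligemma2-3b-pt-448")
print(type(model.vision_tower).__name__)           # SigLIP vision encoder
print(type(model.multi_modal_projector).__name__)  # projection into the decoder's embedding space
print(type(model.language_model).__name__)         # Gemma 2 text decoder
print(f"{sum(p.numel() for p in model.parameters()) / 1e9:.1f}B parameters")
```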
Inputs and outputs
- Input: Image and text string, such as a prompt to caption the image, or a question.
- Output: Generated text in response to the input, such as a caption of the image, an answer to a question, a list of object bounding box coordinates, or segmentation codewords (see the parsing sketch below).
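To make the last two output types concrete: PaliGemma-family models encode a bounding box as four `<locNNNN>` tokens (y_min, x_min, y_max, x_max on a 0-1023 grid) followed by the label. The parser below is an illustrative sketch, not part of the library.

```python
# Hedged sketch: parse PaliGemma-style detection output into pixel boxes.
import re

def parse_detections(text: str, width: int, height: int):
    """Convert '<loc....><loc....><loc....><loc....> label' spans to pixel boxes."""
    pattern = re.compile(
        r"<loc(\d{4})><loc(\d{4})><loc(\d{4})><loc(\d{4})>\s*([^<;]+)")
    boxes = []
    for ymin, xmin, ymax, xmax, label in pattern.findall(text):
        boxes.append({
            "label": label.strip(),
            # Coordinates are normalized to 1024 bins; rescale to the image size.
            "box": (int(xmin) / 1024 * width, int(ymin) / 1024 * height,
                    int(xmax) / 1024 * width, int(ymax) / 1024 * height),
        })
    return boxes

print(parse_detections("<loc0256><loc0128><loc0768><loc0896> car", 448, 448))
```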
Citation
```bibtex
@article{steiner2024paligemma2,
    title={PaliGemma 2: A Family of Versatile VLMs for Transfer},
    author={Andreas Steiner and André Susano Pinto and Michael Tschannen and Daniel Keysers and Xiao Wang and Yonatan Bitton and Alexey Gritsenko and Matthias Minderer and Anthony Sherbondy and Shangbang Long and Siyang Qin and Reeve Ingle and Emanuele Bugliarello and Sahar Kazemzadeh and Thomas Mesnard and Ibrahim Alabdulmohsin and Lucas Beyer and Xiaohua Zhai},
    year={2024},
    journal={arXiv preprint arXiv:2412.03555}
}
```
Model data
Pre-training datasets
PaliGemma 2 is pre-trained on a mixture of web-scale image-text datasets, including WebLI (see the responsibility filtering applied to it below).
PaliGemma 2 is based on Gemma 2; information on the pre-training datasets for Gemma 2 is available in the Gemma 2 model card.
Data responsibility filtering
The following filters are applied to WebLI, with the goal of training PaliGemma 2 on safe and responsible data:
- Pornographic image filtering: This filter removes images deemed to be of pornographic nature.
- Text safety filtering: We identify and filter out images that are paired with unsafe text. Unsafe text is any text deemed to contain or be about child sexual abuse imagery (CSAI), pornography, or vulgarities, or that is otherwise offensive.
- Text toxicity filtering: We further use the Perspective API to identify and filter out images that are paired with text deemed insulting, obscene, hateful or otherwise toxic (a minimal sketch of such a check follows this list).
- Text personal information filtering: We filter certain personal information and other sensitive data using the Cloud Data Loss Prevention (DLP) API to protect the privacy of individuals. Identifiers such as social security numbers and other sensitive information types are removed.
- Additional methods: Filtering based on content quality and safety in line with our policies and practices.
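As an illustration of the text-toxicity step above, here is a minimal sketch of a toxicity check against the public Perspective API. This is not the production filtering pipeline; the endpoint, attribute names, and response shape are from the public API documentation, and `API_KEY` and the 0.8 threshold are placeholders.

```python
# Hedged sketch of a Perspective API toxicity check, illustrating the kind of
# text-toxicity filtering described above (not the actual training pipeline).
import requests

API_KEY = "..."  # placeholder: your Perspective API key
URL = f"https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze?key={API_KEY}"

def toxicity_score(text: str) -> float:
    """Return the TOXICITY probability for a caption, in [0, 1]."""
    body = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=body, timeout=10)
    response.raise_for_status()
    return response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Drop image-text pairs whose caption scores above a chosen threshold.
if toxicity_score("example caption") > 0.8:
    pass  # filter out the pair
```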
🔧 Technical Details
Hardware
PaliGemma 2 was trained using the latest generation of Tensor Processing Unit (TPU) hardware (TPUv5e).
Software
Training was completed using JAX, Flax, TFDS and big_vision.
JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models.
TFDS is used to access datasets and Flax is used for the model architecture. The PaliGemma 2 fine-tuning code and inference code are released in the big_vision GitHub repository.
📚 Documentation
Benchmark results
To verify the transferability of PaliGemma 2 to a wide variety of academic tasks, we fine-tune the pretrained models on each task. We report results at different resolutions to give an impression of which tasks benefit from increased resolution. Importantly, none of these tasks or datasets are part of the pretraining data mixture, and their images are explicitly removed from the web-scale pre-training data.
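For readers who want a starting point for this kind of transfer, the sketch below shows one plausible fine-tuning setup with the transformers Trainer. It is an assumption-laden illustration, not the recipe behind the numbers in the table below: the dataset `train_ds` (with hypothetical "image"/"prefix"/"suffix" fields), the hyperparameters, and the choice to freeze the vision tower are all illustrative.

```python
# Illustrative fine-tuning sketch (not the recipe behind the reported results).
# Assumes `train_ds` yields dicts with "image", "prefix" (prompt) and "suffix"
# (target text) fields -- hypothetical names standing in for your own dataset.
import torch
from transformers import (
    PaliGemmaProcessor,
    PaliGemmaForConditionalGeneration,
    Trainer,
    TrainingArguments,
)

model_id = "google/paligemma2-3b-pt-448"
processor = PaliGemmaProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# One common transfer choice: freeze the vision encoder, train the decoder.
for param in model.vision_tower.parameters():
    param.requires_grad = False

def collate_fn(examples):
    # The processor's `suffix` argument builds the training labels.
    batch = processor(
        text=[ex["prefix"] for ex in examples],
        images=[ex["image"].convert("RGB") for ex in examples],
        suffix=[ex["suffix"] for ex in examples],
        return_tensors="pt",
        padding="longest",
    )
    return batch.to(torch.bfloat16)

trainer = Trainer(
    model=model,
    train_dataset=train_ds,  # hypothetical dataset, see note above
    data_collator=collate_fn,
    args=TrainingArguments(
        output_dir="paligemma2-finetune",
        per_device_train_batch_size=2,
        learning_rate=2e-5,
        num_train_epochs=1,
        bf16=True,
        remove_unused_columns=False,
    ),
)
trainer.train()
```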
PaliGemma 2 results by model resolution and size
| Benchmark | 224-3B | 224-10B | 224-28B | 448-3B | 448-10B | 448-28B |
|---|---|---|---|---|---|---|
| [AI2D][ai2d] | 74.7 | 83.1 | 83.2 | 76.0 | 84.4 | 84.6 |
| [AOKVQA-DA][aokvqa-da] (val) | 64.2 | 68.9 | 70.2 | 67.9 | 70.8 | 71.2 |
| [AOKVQA-MC][aokvqa-mc] (val) | 79.7 | 83.7 | 84.7 | 82.5 | 85.9 | 87.0 |
| [ActivityNet-CAP][anet-cap] | 34.2 | 35.9 | - | - | - | - |
| [ActivityNet-QA][anet-qa] | 51.3 | 53.2 | - | - | - | - |
| [COCO-35L][coco-35l] (avg34) | 113.9 | 115.8 | 116.5 | 115.8 | 117.2 | 117.2 |
| [COCO-35L][coco-35l] (en) | 138.4 | 140.8 | 142.4 | 140.4 | 142.4 | 142.3 |
| [COCOcap][coco-cap] | 141.3 | 143.7 | 144.0 | 143.4 | 145.0 | 145.2 |
| [ChartQA][chartqa] (aug) | 74.4 | 74.2 | 68.9 | 89.2 | 90.1 | 85.1 |
| [ChartQA][chartqa] (human) | 42.0 | 48.4 | 46.8 | 54.0 | 66.4 | 61.3 |
| [CountBenchQA][countbenchqa] | 81.0 | 84.0 | 86.4 | 82.0 | 85.3 | 87.4 |
| [DocVQA][docvqa] (val) | 39.9 | 43.9 | 44.9 | 73.6 | 76.6 | 76.1 |
| [GQA][gqa] | 66.2 | 67.2 | 67.3 | 68.1 | 68.3 | 68.3 |
| [InfoVQA][info-vqa] (val) | 25.2 | 33.6 | 36.4 | 37.5 | 47.8 | 46.7 |
| [MARVL][marvl] (avg5) | 83.5 | 89.5 | 90.6 | 82.7 | 89.1 | 89.7 |
| [MSRVTT-CAP][msrvtt] | 68.5 | 72.1 | - | - | - | - |
| [MSRVTT-QA][msrvtt] | 50.5 | 51.9 | - | - | - | - |
| [MSVD-QA][msvd-qa] | 61.1 | 62.5 | - | - | - | - |
| [NLVR2][nlvr2] | 91.4 | 93.9 | 94.2 | 91.6 | 93.7 | 94.1 |
| [NoCaps][nocaps] | 123.1 | 126.3 | 127.1 | 123.5 | 126.9 | 127.0 |
| [OCR-VQA][ocr-vqa] | 73.4 | 74.7 | 75.3 | 75.7 | 76.3 | 76.6 |
| [OKVQA][okvqa] | 64.2 | 68.0 | 71.2 | 64.1 | 68.6 | 70.6 |
| [RSVQA-hr][rsvqa-hr] (test) | 92.7 | 92.6 | 92.7 | 92.8 | 92.8 | 92.8 |
| [RSVQA-hr][rsvqa-hr] (test2) | 90.9 | 90.8 | 90.9 | 90.7 | 90.7 | 90.8 |
| [RSVQA-lr][rsvqa-lr] | 93.0 | 92.8 | 93.5 | 92.7 | 93.1 | 93.7 |
| [RefCOCO][refcoco] (testA) | 75.7 | 77.2 | 76.8 | 78.6 | 79.7 | 79.3 |
| [RefCOCO][refcoco] (testB) | 71.0 | 74.2 | 73.9 | 73.5 | 76.2 | 74.8 |
| [RefCOCO][refcoco] (val) | 73.4 | 75.9 | 75.0 | 76.3 | 78.2 | 77.3 |
| [RefCOCO+][refcoco+] (testA) | 72.7 | 74.7 | 73.6 | 76.1 | 77.7 | 76.6 |
| [RefCOCO+][refcoco+] (testB) | 64.2 | 68.4 | 67.1 | 67.0 | 71.1 | 68.6 |
| [RefCOCO+][refcoco+] (val) | 68.6 | 72.0 | 70.3 | 72.1 | 74.4 | 72.8 |
| [RefCOCOg][refcocog] (test) | 69.0 | 71.9 | 70.7 | 72.7 | 74.8 | 73.7 |
| [RefCOCOg][refcocog] (val) | 68.3 | 71.4 | 70.5 | 72.3 | 74.4 | 73.0 |
| [ST-VQA][st-vqa] (val) | 61.9 | 64.3 | 65.1 | 80.5 | 82.0 | 81.8 |
| [SciCap][scicap] | 165.1 | 159.5 | 156.9 | 183.3 | 177.2 | 172.7 |
| [ScienceQA][scienceqa] | 96.1 | 98.2 | 98.2 | 96.2 | 98.5 | 98.6 |
| [Screen2Words][screen2words] | 113.3 | 117.8 | 122.8 | 114.0 | 119.1 | 123.4 |
| [TallyQA][tallyqa] (complex) | 70.3 | 73.4 | 74.2 | 73.6 | 76.7 | 76.8 |
| [TallyQA][tallyqa] (simple) | 81.8 | 83.2 | 83.4 | 85.3 | 86.2 | 85.7 |
| [TextCaps][textcaps] | 127.5 | 137.9 | 139.9 | 152.1 | 157.7 | 153.6 |
| [TextVQA][textvqa] (val) | 59.6 | 64.0 | 64.7 | 75.2 | 76.6 | 76.2 |
| [VATEX][vatex] | 80.8 | 82.7 | - | - | - | - |
📄 License
This model is released under the Gemma license.
⚠️ Important Note
To access PaliGemma on Hugging Face, you're required to review and agree to Google's usage license. To do this, please ensure you're logged in to Hugging Face and acknowledge the license on the model page. Requests are processed immediately.
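Once you have accepted the license, one way to authenticate in code is via huggingface_hub; this is a minimal sketch, and the token value is a placeholder.

```python
# Authenticate before loading the gated checkpoint.
from huggingface_hub import login

login()  # prompts for an access token, or pass login(token="hf_...")
```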
| Property | Details |
|---|---|
| Library Name | transformers |
| Pipeline Tag | image-text-to-text |
| Model page | PaliGemma |
| Resources and technical documentation | [PaliGemma 2 on Kaggle](https://www.kaggle.com/models/google/paligemma-2), Responsible Generative AI Toolkit |
| Terms of Use | Terms |
| Authors | Google |