🚀 PaliGemma 2 Model Card
PaliGemma 2 is an updated vision-language model that incorporates the capabilities of the Gemma 2 models. It takes both image and text as input and generates text output, supporting multiple languages, and is suitable for a wide range of vision-language tasks.
🚀 Quick Start
To access PaliGemma on Hugging Face, you're required to review and agree to Google's usage license. To do so, please make sure you're logged in to Hugging Face and click the "Acknowledge license" button below. Requests are processed immediately.
Model page: PaliGemma
Transformers PaliGemma 2 3B weights are fine-tuned on a mixture of academic tasks using 448x448 input images. PaliGemma 2 mix checkpoints are fine-tuned on diverse tasks and ready for out-of-the-box use, while pt checkpoints are pre-trained for further fine-tuning. These tasks include short and long captioning, optical character recognition, question answering, object detection and segmentation, etc. The model is available in the bfloat16 format for research purposes only.
Resources and technical documentation:
- Terms of Use: Terms
- Authors: Google
✨ Features
Model Information
Model Summary
PaliGemma 2 is an update of the PaliGemma vision-language model (VLM), incorporating the capabilities of the Gemma 2 models. Inspired by PaLI-3, it's based on open components like the SigLIP vision model and Gemma 2 language models. It accepts both image and text as input and generates text output, supporting multiple languages. It's designed for class-leading fine-tune performance on various vision-language tasks such as image and short video captioning, visual question answering, text reading, object detection, and object segmentation.
Model Architecture
PaliGemma 2 is composed of a Transformer decoder and a Vision Transformer image encoder. The text decoder is initialized from Gemma 2 in 2B, 9B, and 27B parameter sizes. The image encoder is initialized from SigLIP-So400m/14. Similar to the original PaliGemma model, PaliGemma 2 is trained following the PaLI-3 recipes.
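To see this composition concretely, the checkpoint configuration exposes both sub-models. The following is a minimal sketch (not part of the original card); it assumes you have accepted the license and are logged in to Hugging Face, since the repository is gated.

```python
# Minimal sketch: inspect the SigLIP image encoder and Gemma 2 text decoder
# that make up a PaliGemma 2 checkpoint (requires `transformers` and access to
# the gated repo, e.g. via `huggingface-cli login`).
from transformers import AutoConfig

config = AutoConfig.from_pretrained("google/paligemma2-3b-mix-448")

# SigLIP vision encoder: input resolution and patch size
print("vision:", config.vision_config.image_size, config.vision_config.patch_size)

# Gemma 2 text decoder: depth and hidden width
print("text:", config.text_config.num_hidden_layers, config.text_config.hidden_size)
```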
Inputs and Outputs
- Input: Image and text string, e.g., a prompt to caption the image or a question.
- Output: Generated text in response to the input, such as an image caption, an answer to a question, a list of object bounding box coordinates, or segmentation codewords.
Model Data
Pre-train Datasets
PaliGemma 2 is pre-trained on the following mixture of datasets:
- WebLI: WebLI (Web Language Image) is a web-scale multilingual image-text dataset built from the public web. Various WebLI splits are used to acquire versatile model capabilities like visual semantic understanding, object localization, visually-situated text understanding, and multilinguality.
- CC3M-35L: Curated English image-alt_text pairs from webpages (Sharma et al., 2018). We used the Google Cloud Translation API to translate into 34 additional languages.
- VQ²A-CC3M-35L/VQG-CC3M-35L: A subset of VQ2A-CC3M (Changpinyo et al., 2022a), translated into the same 34 additional languages as CC3M-35L using the Google Cloud Translation API.
- OpenImages: Detection and object-aware questions and answers (Piergiovanni et al. 2022) generated by handcrafted rules on the OpenImages dataset.
- WIT: Images and texts collected from Wikipedia (Srinivasan et al., 2021).
PaliGemma 2 is based on Gemma 2, and you can find information on the pre-training datasets for Gemma 2 in the Gemma 2 model card.
Data Responsibility Filtering
The following filters are applied to WebLI to train PaliGemma 2 on safe and responsible data:
- Pornographic image filtering: This filter removes images deemed pornographic.
- Text safety filtering: We identify and filter out images paired with unsafe text. Unsafe text is any text containing or about child sexual abuse imagery (CSAI), pornography, vulgarities, or offensive content.
- Text toxicity filtering: We use the Perspective API to identify and filter out images paired with text deemed insulting, obscene, hateful, or otherwise toxic.
- Text personal information filtering: We filter certain personal information and other sensitive data using the Cloud Data Loss Prevention (DLP) API to protect individual privacy. Identifiers such as social security numbers and other sensitive information types are removed.
- Additional methods: Filtering based on content quality and safety in line with our policies and practices.
📦 Installation
The usage example below relies on Transformers (a recent release with PaliGemma 2 support) together with `torch`, `pillow`, and `accelerate` (needed for `device_map="auto"`), all of which can be installed from PyPI.
💻 Usage Examples
Basic Usage
```python
from transformers import (
    PaliGemmaProcessor,
    PaliGemmaForConditionalGeneration,
)
from transformers.image_utils import load_image
import torch

model_id = "google/paligemma2-3b-mix-448"

# Load an example image
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg"
image = load_image(url)

# Load the model in bfloat16 and place it automatically (requires `accelerate`)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto").eval()
processor = PaliGemmaProcessor.from_pretrained(model_id)

# "describe en" requests a longer English description (see the prompt templates below)
prompt = "describe en"
model_inputs = processor(text=prompt, images=image, return_tensors="pt").to(torch.bfloat16).to(model.device)
input_len = model_inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False)
    generation = generation[0][input_len:]  # keep only the newly generated tokens
    decoded = processor.decode(generation, skip_special_tokens=True)
    print(decoded)
```
You can use the following prompt templates to perform different tasks:
- `"cap {lang}"`: Raw short caption (from WebLI-alt)
- `"caption {lang}"`: Nice, COCO-like short captions
- `"describe {lang}"`: Longer, more descriptive captions
- `"ocr"`: Optical character recognition
- `"answer {lang} {question}"`: Question answering about the image contents
- `"question {lang} {answer}"`: Question generation for a given answer
- `"detect {object} ; {object}"`: Locate listed objects in an image and return the bounding boxes for those objects
- `"segment {object}"`: Locate the area occupied by the object in an image to create an image segmentation for that object
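With the `detect` and `segment` prompts, the model's text output encodes locations as special tokens of the form `<loc0000>`…`<loc1023>`: four per bounding box, in y_min, x_min, y_max, x_max order on a 1024-bin grid normalized to the image size. The snippet below is an illustrative sketch (the helper name and regex are not part of the card) of how such detection output could be converted to pixel coordinates; check the big_vision reference code if you need the exact conventions.

```python
import re

def parse_detections(decoded: str, image_width: int, image_height: int):
    """Illustrative helper: parse PaliGemma-style detection output such as
    '<loc0250><loc0100><loc0900><loc0980> car' into labeled pixel boxes."""
    boxes = []
    # Four <locNNNN> tokens followed by the object label (entries separated by ';')
    for locs, label in re.findall(r"((?:<loc\d{4}>){4})\s*([^<;]+)", decoded):
        y_min, x_min, y_max, x_max = [int(v) for v in re.findall(r"\d{4}", locs)]
        boxes.append({
            "label": label.strip(),
            # Location tokens index a 0..1023 grid; rescale to pixel coordinates
            "box": (x_min / 1024 * image_width, y_min / 1024 * image_height,
                    x_max / 1024 * image_width, y_max / 1024 * image_height),
        })
    return boxes

print(parse_detections("<loc0250><loc0100><loc0900><loc0980> car", 640, 480))
```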
Here is a notebook that showcases fine-tuning PaliGemma 2.
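As a rough sketch of what that involves (illustrative only, not the notebook's code, and assuming a recent Transformers release where the processor accepts a `suffix` argument): passing the target text as `suffix` makes the processor emit `labels` with the prompt portion masked, so a forward pass returns a training loss directly. The checkpoint id and the prompt/answer strings below are placeholders.

```python
import torch
from transformers import PaliGemmaProcessor, PaliGemmaForConditionalGeneration
from transformers.image_utils import load_image

# A pre-trained ("pt") checkpoint is the usual starting point for fine-tuning.
model_id = "google/paligemma2-3b-pt-448"
processor = PaliGemmaProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id, torch_dtype=torch.bfloat16)

image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg")

# `suffix` holds the target text; the processor builds `labels` with prompt tokens masked out.
batch = processor(text="answer en Where is the car parked?", suffix="on the street",
                  images=image, return_tensors="pt")

outputs = model(**batch)   # loss is computed because `labels` are present
outputs.loss.backward()    # plug this into your own optimizer step / training loop
```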
🔧 Technical Details
Hardware
PaliGemma 2 was trained using the latest generation of Tensor Processing Unit (TPU) hardware (TPUv5e).
Software
Training was completed using JAX, Flax, TFDS, and big_vision.
JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models.
TFDS is used to access datasets, and Flax is used for the model architecture. The PaliGemma 2 fine-tune code and inference code are released in the big_vision GitHub repository.
📚 Documentation
Benchmark Results
To verify the transferability of PaliGemma 2 to various academic tasks, we fine-tune the pretrained models on each task. We report results at different resolutions to show which tasks benefit from increased resolution and which benefit from larger models. Importantly, none of these tasks or datasets are part of the pre-training data mixture, and their images are explicitly removed from the web-scale pre-training data.
PaliGemma 2 results by model resolution and size
Benchmark | 224-3B | 224-10B | 224-28B | 448-3B | 448-10B | 448-28B |
---|---|---|---|---|---|---|
[AI2D][ai2d] | 74.7 | 83.1 | 83.2 | 76.0 | 84.4 | 84.6 |
[AOKVQA-DA][aokvqa-da] (val) | 64.2 | 68.9 | 70.2 | 67.9 | 70.8 | 71.2 |
[AOKVQA-MC][aokvqa-mc] (val) | 79.7 | 83.7 | 84.7 | 82.5 | 85.9 | 87.0 |
[ActivityNet-CAP][anet-cap] | 34.2 | 35.9 | - | - | - | - |
[ActivityNet-QA][anet-qa] | 51.3 | 53.2 | - | - | - | - |
[COCO-35L][coco-35l] (avg34) | 113.9 | 115.8 | 116.5 | 115.8 | 117.2 | 117.2 |
[COCO-35L][coco-35l] (en) | 138.4 | 140.8 | 142.4 | 140.4 | 142.4 | 142.3 |
[COCOcap][coco-cap] | 141.3 | 143.7 | 144.0 | 143.4 | 145.0 | 145.2 |
[ChartQA][chartqa] (aug) | 74.4 | 74.2 | 68.9 | 89.2 | 90.1 | 85.1 |
[ChartQA][chartqa] (human) | 42.0 | 48.4 | 46.8 | 54.0 | 66.4 | 61.3 |
[CountBenchQA][countbenchqa] | 81.0 | 84.0 | 86.4 | 82.0 | 85.3 | 87.4 |
[DocVQA][docvqa] (val) | 39.9 | 43.9 | 44.9 | 73.6 | 76.6 | 76.1 |
[GQA][gqa] | 66.2 | 67.2 | 67.3 | 68.1 | 68.3 | 68.3 |
[InfoVQA][info-vqa] (val) | 25.2 | 33.6 | 36.4 | 37.5 | 47.8 | 46.7 |
[MARVL][marvl] (avg5) | 83.5 | 89.5 | 90.6 | 82.7 | 89.1 | 89.7 |
[MSRVTT-CAP][msrvtt] | 68.5 | 72.1 | - | - | - | - |
[MSRVTT-QA][msrvtt] | 50.5 | 51.9 | - | - | - | - |
[MSVD-QA][msvd-qa] | 61.1 | 62.5 | - | - | - | - |
[NLVR2][nlvr2] | 91.4 | 93.9 | 94.2 | 91.6 | 93.7 | 94.1 |
[NoCaps][nocaps] | 123.1 | 126.3 | 127.1 | 123.5 | 126.9 | 127.0 |
[OCR-VQA][ocr-vqa] | 73.4 | 74.7 | 75.3 | 75.7 | 76.3 | 76.6 |
[OKVQA][okvqa] | 64.2 | 68.0 | 71.2 | 64.1 | 68.6 | 70.6 |
[RSVQA-hr][rsvqa-hr] (test) | 92.7 | 92.6 | 92.7 | 92.8 | 92.8 | 92.8 |
[RSVQA-hr][rsvqa-hr] (test2) | 90.9 | 90.8 | 90.9 | 90.7 | 90.7 | 90.8 |
[RSVQA-lr][rsvqa-lr] | 93.0 | 92.8 | 93.5 | 92.7 | 93.1 | 93.7 |
[RefCOCO][refcoco] (testA) | 75.7 | 77.2 | 76.8 | 78.6 | 79.7 | 79.3 |
[RefCOCO][refcoco] (testB) | 71.0 | 74.2 | 73.9 | 73.5 | 76.2 | 74.8 |
[RefCOCO][refcoco] (val) | 73.4 | 75.9 | 75.0 | 76.3 | 78.2 | 77.3 |
[RefCOCO+][refcoco+] (testA) | 72.7 | 74.7 | 73.6 | 76.1 | 77.7 | 76.6 |
[RefCOCO+][refcoco+] (testB) | 64.2 | 68.4 | 67.1 | 67.0 | 71.1 | 68.6 |
[RefCOCO+][refcoco+] (val) | 68.6 | 72.0 | 70.3 | 72.1 | 74.4 | 72.8 |
[RefCOCOg][refcocog] (test) | 69.0 | 71.9 | 70.7 | 72.7 | 74.8 | 73.7 |
[RefCOCOg][refcocog] (val) | 68.3 | 71.4 | 70.5 | 72.3 | 74.4 | 73.0 |
[ST-VQA][st-vqa] (val) | 61.9 | 64.3 | 65.1 | 80.5 | 82.0 | 81.8 |
[SciCap][scicap] | 165.1 | 159.5 | 156.9 | 183.3 | 177.2 | 172.7 |
[ScienceQA][scienceqa] | 96.1 | 98.2 | 98.2 | 96.2 | 98.5 | 98.6 |
[Screen2Words][screen2words] | 113.3 | 117.8 | 122.8 | 114.0 | 119.1 | 123.4 |
[TallyQA][tallyqa] (complex) | 70.3 | 73.4 | 74.2 | 73.6 | 76.7 | 76.8 |
[TallyQA][tallyqa] (simple) | 81.8 | 83.2 | 83.4 | 85 |
📄 License
This project is released under the Gemma license.






