🚀 PaliGemma 2 model card
PaliGemma 2 is a vision-language model that builds on the capabilities of Gemma 2. It takes both images and text as input and generates text output, making it suitable for a variety of vision-language tasks.
🚀 Quick Start
Access PaliGemma on Hugging Face
To access PaliGemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below. Requests are processed immediately.
Downloading Model Weights
First, authenticate using the Hugging Face CLI:

```bash
huggingface-cli login
```

Then use the following command to download the model weights:

```bash
huggingface-cli download --local-dir models google/paligemma2-3b-mix-224-jax
```

This will download the weights to the `models` directory.
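If you prefer to stay in Python, the same download can be done with the `huggingface_hub` library. This is a minimal sketch, assuming the same repository and target directory as the CLI command above:

```python
from huggingface_hub import snapshot_download

# Download the PaliGemma 2 JAX weights into the local "models" directory,
# mirroring the CLI command above. A prior `huggingface-cli login` (or an
# HF_TOKEN environment variable) is needed because the repository is gated.
snapshot_download(
    repo_id="google/paligemma2-3b-mix-224-jax",
    local_dir="models",
)
```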
Resources and technical documentation: [big_vision GitHub repository](https://github.com/google-research/big_vision)
Terms of Use: [Terms](https://ai.google.dev/gemma/terms)
Authors: Google
✨ Features
Model information
Model summary
PaliGemma 2 is an update of the PaliGemma vision-language model (VLM), incorporating the capabilities of the Gemma 2 models. The PaliGemma family of models is inspired by PaLI-3 and based on open components such as the SigLIP vision model and the Gemma 2 language models. It takes both image and text as input and generates text as output, supporting multiple languages. It is designed for class-leading fine-tune performance on a wide range of vision-language tasks such as image and short-video captioning, visual question answering, text reading, object detection, and object segmentation.
Model architecture
PaliGemma 2 is the composition of a Transformer decoder and a Vision Transformer image encoder. The text decoder is initialized from Gemma 2 in the 2B, 9B, and 27B parameter sizes. The image encoder is initialized from SigLIP-So400m/14. Similar to the original PaliGemma model, PaliGemma 2 is trained following the PaLI-3 recipes.
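As a rough structural sketch (not the actual big_vision implementation), the composition can be pictured as a SigLIP encoder whose image tokens are linearly projected to the decoder's embedding width and prepended to the text tokens. All module names below are illustrative:

```python
import torch
import torch.nn as nn

class PaliGemmaSketch(nn.Module):
    """Illustrative composition: ViT encoder + linear projector + LM decoder."""

    def __init__(self, vision_encoder: nn.Module, lm_decoder: nn.Module,
                 vision_dim: int, lm_dim: int):
        super().__init__()
        self.vision_encoder = vision_encoder            # stands in for SigLIP-So400m/14
        self.projector = nn.Linear(vision_dim, lm_dim)  # maps image tokens to LM width
        self.lm_decoder = lm_decoder                    # stands in for Gemma 2 (2B/9B/27B)

    def forward(self, pixel_values: torch.Tensor, text_embeds: torch.Tensor):
        image_tokens = self.projector(self.vision_encoder(pixel_values))
        # Image tokens act as a prefix to the text tokens in the decoder input.
        return self.lm_decoder(torch.cat([image_tokens, text_embeds], dim=1))
```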
Inputs and outputs
- Input: Image and text string, such as a prompt to caption the image, or a question.
- Output: Generated text in response to the input, such as a caption of the image, an answer to a question, a list of object bounding box coordinates, or segmentation codewords (a minimal inference sketch follows).
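With the `transformers` integration, this input/output contract looks roughly as follows. This is a hedged sketch: it assumes the transformers-format checkpoint `google/paligemma2-3b-mix-224` (not the `-jax` weights above) and a hypothetical local image file `example.jpg`:

```python
from PIL import Image
import torch
from transformers import PaliGemmaForConditionalGeneration, PaliGemmaProcessor

model_id = "google/paligemma2-3b-mix-224"  # transformers-format checkpoint
processor = PaliGemmaProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id)

image = Image.open("example.jpg")  # hypothetical local image
prompt = "<image>caption en"       # mix checkpoints expect a task prefix
inputs = processor(text=prompt, images=image, return_tensors="pt")

with torch.inference_mode():
    output = model.generate(**inputs, max_new_tokens=32)

# Strip the prompt tokens and decode only the generated continuation.
generated = output[0][inputs["input_ids"].shape[-1]:]
print(processor.decode(generated, skip_special_tokens=True))
```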
Model data
Pre-training datasets
PaliGemma 2 is pre-trained on the following mixture of datasets:
- WebLI: WebLI (Web Language Image) is a web-scale multilingual image-text dataset built from the public web. A wide range of WebLI splits are used to acquire versatile model capabilities, such as visual semantic understanding, object localization, visually-situated text understanding, and multilinguality.
- CC3M-35L: Curated English image-alt_text pairs from webpages ([Sharma et al., 2018](https://aclanthology.org/P18-1238/)). We used the Google Cloud Translation API to translate into 34 additional languages.
- VQ²A-CC3M-35L/VQG-CC3M-35L: A subset of VQ2A-CC3M ([Changpinyo et al., 2022a](https://aclanthology.org/2022.naacl-main.142/)), translated into the same additional 34 languages as CC3M-35L, using the Google Cloud Translation API.
- OpenImages: Detection and object-aware questions and answers (Piergiovanni et al., 2022) generated by handcrafted rules on the OpenImages dataset.
- WIT: Images and texts collected from Wikipedia (Srinivasan et al., 2021).
PaliGemma 2 is based on Gemma 2, and you can find information on the pre-training datasets for Gemma 2 in the Gemma 2 model card.
Data responsibility filtering
The following filters are applied to WebLI, with the goal of training PaliGemma 2 on safe and responsible data:
- Pornographic image filtering: This filter removes images deemed to be of a pornographic nature.
- Text safety filtering: We identify and filter out images that are paired with unsafe text. Unsafe text is any text deemed to contain or be about child sexual abuse imagery (CSAI), pornography, or vulgarities, or that is otherwise offensive.
- Text toxicity filtering: We further use the Perspective API to identify and filter out images that are paired with text deemed insulting, obscene, hateful, or otherwise toxic.
- Text personal information filtering: We filtered certain personal information and other sensitive data using the Cloud Data Loss Prevention (DLP) API to protect the privacy of individuals. Identifiers such as social security numbers and [other sensitive information types] were removed.
- Additional methods: Filtering based on content quality and safety in line with our policies and practices.
[other sensitive information types]: https://cloud.google.com/sensitive-data-protection/docs/high-sensitivity-infotypes-reference
Implementation information
Hardware
PaliGemma 2 was trained using the latest generation of Tensor Processing Unit (TPU) hardware (TPUv5e).
Software
Training was completed using JAX, Flax, TFDS, and [big_vision](https://github.com/google-research/big_vision).
JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models.
TFDS is used to access datasets and Flax is used for the model architecture. The PaliGemma 2 fine-tune code and inference code are released in the big_vision GitHub repository.
Evaluation information
Benchmark results
In order to verify the transferability of PaliGemma 2 to a wide variety of academic tasks, we fine-tune the pretrained models on each task. We report results on different resolutions to provide an impression of which tasks benefit from increased resolution and which tasks benefit from larger models. Importantly, none of these tasks or datasets are part of the pre-training data mixture, and their images are explicitly removed from the web-scale pre-training data.
PaliGemma 2 results by model resolution and size
Benchmark | 224-3B | 224-10B | 224-28B | 448-3B | 448-10B | 448-28B |
---|---|---|---|---|---|---|
[AI2D][ai2d] | 74.7 | 83.1 | 83.2 | 76.0 | 84.4 | 84.6 |
[AOKVQA-DA][aokvqa-da] (val) | 64.2 | 68.9 | 70.2 | 67.9 | 70.8 | 71.2 |
[AOKVQA-MC][aokvqa-mc] (val) | 79.7 | 83.7 | 84.7 | 82.5 | 85.9 | 87.0 |
[ActivityNet-CAP][anet-cap] | 34.2 | 35.9 | - | - | - | - |
[ActivityNet-QA][anet-qa] | 51.3 | 53.2 | - | - | - | - |
[COCO-35L][coco-35l] (avg34) | 113.9 | 115.8 | 116.5 | 115.8 | 117.2 | 117.2 |
[COCO-35L][coco-35l] (en) | 138.4 | 140.8 | 142.4 | 140.4 | 142.4 | 142.3 |
[COCOcap][coco-cap] | 141.3 | 143.7 | 144.0 | 143.4 | 145.0 | 145.2 |
[ChartQA][chartqa] (aug) | 74.4 | 74.2 | 68.9 | 89.2 | 90.1 | 85.1 |
[ChartQA][chartqa] (human) | 42.0 | 48.4 | 46.8 | 54.0 | 66.4 | 61.3 |
[CountBenchQA][countbenchqa] | 81.0 | 84.0 | 86.4 | 82.0 | 85.3 | 87.4 |
[DocVQA][docvqa] (val) | 39.9 | 43.9 | 44.9 | 73.6 | 76.6 | 76.1 |
[GQA][gqa] | 66.2 | 67.2 | 67.3 | 68.1 | 68.3 | 68.3 |
[InfoVQA][info-vqa] (val) | 25.2 | 33.6 | 36.4 | 37.5 | 47.8 | 46.7 |
[MARVL][marvl] (avg5) | 83.5 | 89.5 | 90.6 | 82.7 | 89.1 | 89.7 |
[MSRVTT-CAP][msrvtt] | 68.5 | 72.1 | - | - | - | - |
[MSRVTT-QA][msrvtt] | 50.5 | 51.9 | - | - | - | - |
[MSVD-QA][msvd-qa] | 61.1 | 62.5 | - | - | - | - |
[NLVR2][nlvr2] | 91.4 | 93.9 | 94.2 | 91.6 | 93.7 | 94.1 |
[NoCaps][nocaps] | 123.1 | 126.3 | 127.1 | 123.5 | 126.9 | 127.0 |
[OCR-VQA][ocr-vqa] | 73.4 | 74.7 | 75.3 | 75.7 | 76.3 | 76.6 |
[OKVQA][okvqa] | 64.2 | 68.0 | 71.2 | 64.1 | 68.6 | 70.6 |
[RSVQA-hr][rsvqa-hr] (test) | 92.7 | 92.6 | 92.7 | 92.8 | 92.8 | 92.8 |
[RSVQA-hr][rsvqa-hr] (test2) | 90.9 | 90.8 | 90.9 | 90.7 | 90.7 | 90.8 |
[RSVQA-lr][rsvqa-lr] | 93.0 | 92.8 | 93.5 | 92.7 | 93.1 | 93.7 |
[RefCOCO][refcoco] (testA) | 75.7 | 77.2 | 76.8 | 78.6 | 79.7 | 79.3 |
[RefCOCO][refcoco] (testB) | 71.0 | 74.2 | 73.9 | 73.5 | 76.2 | 74.8 |
[RefCOCO][refcoco] (val) | 73.4 | 75.9 | 75.0 | 76.3 | 78.2 | 77.3 |
[RefCOCO+][refcoco+] (testA) | 72.7 | 74.7 | 73.6 | 76.1 | 77.7 | 76.6 |
[RefCOCO+][refcoco+] (testB) | 64.2 | 68.4 | 67.1 | 67.0 | 71.1 | 68.6 |
[RefCOCO+][refcoco+] (val) | 68.6 | 72.0 | 70.3 | 72.1 | 74.4 | 72.8 |
[RefCOCOg][refcocog] (test) | 69.0 | 71.9 | 70.7 | 72.7 | 74.8 | 73.7 |
[RefCOCOg][refcocog] (val) | 68.3 | 71.4 | 70.5 | 72.3 | 74.4 | 73.0 |
[ST-VQA][st-vqa] (val) | 61.9 | 64.3 | 65.1 | 80.5 | 82.0 | 81.8 |
[SciCap][scicap] | 165.1 | 159.5 | 156.9 | 183.3 | 177.2 | 172.7 |
[ScienceQA][scienceqa] | 96.1 | 98.2 | 98.2 | 96.2 | 98.5 | 98.6 |
[Screen2Words][screen2words] | 113.3 | 117.8 | 122.8 | 114.0 | 119.1 | 123.4 |
[TallyQA][tallyqa] (complex) | 70.3 | 73.4 | 74.2 | 73.6 | 76.7 | 76.8 |
[TallyQA][tallyqa] (simple) | 81.8 | 83.2 | 83.4 | 85.3 | 86.2 | 85.7 |
[TextCaps][textcaps] | 127.5 | 137.9 | 139.9 | 152.1 | 157.7 | 153.6 |
[TextVQA][textvqa] (val) | 59.6 | 64.0 | 64.7 | 75.2 | 76.6 | 76.2 |
[VATEX][vatex] | 80.8 | 82.7 | - | - | - | - |
[VQAv2][vqav2] (minival) | 83.0 | 84.3 | 84.5 | 84.8 | 85.8 | 85.8 |
[VizWizVQA][vizwiz-vqa] (val) | 76.4 | 78.1 | 78.7 | 77.5 | 78.6 | 78.9 |
[WidgetCap][widgetcap] | 138.1 | 139.8 | 138.8 | 151.4 | 151.9 | 148.9 |
[XM3600][xm3600] (avg35) | 42.8 | 44.5 | 45.2 | 43.2 | 44.6 | 45.2 |
[XM3600][xm3600] (en) | 79.8 | 80.7 | 81.0 | 80.3 | 81.5 | 81.0 |
[xGQA][xgqa] (avg7) | 58.6 | 61.4 | 61.1 | 60.4 | 62.6 | 62.1 |
Additional Benchmarks
[ICDAR 2015 Incidental][icdar2015-inc]
Model | Precision | Recall | F1 |
---|---|---|---|
PaliGemma 2 3B | 81.9 | 70.7 | 75.9 |
[Total-Text][total-text]
Model | Precision | Recall | F1 |
---|---|---|---|
PaliGemma 2 3B | 73.8 | 74.5 | 74.2 |
[FinTabNet][fintabnet]
Model | S-TEDS | TEDS | GriTS-Top | GriTS-Con |
---|---|---|---|---|
PaliGemma 2 3B | 99.2 | 98.9 | 99.4 | 99.2 |
[PubTabNet][pubtabnet]
Model | S-TEDS | TEDS | GriTS-Top | GriTS-Con |
---|---|---|---|---|
PaliGemma 2 3B | 97.6 | 97.3 | 97.9 | 97.8 |
[GrandStaff][grandstaff]
Model | CER | LER | SER |
---|---|---|---|
PaliGemma 2 3B | 1.6 | 6.7 | 2.3 |
[PubChem][pubchem]
- PaliGemma 2 3B, Full Match: 94.8
[DOCCI][docci]
Model | avg#char | avg#sent | NES % |
---|---|---|---|
PaliGemma 2 3B | 529 | 7.7 | 28.4 |
PaliGemma 2 10B | 521 | 7.5 | 20.3 |
- avg#char: Average number of characters
- avg#sent: Average number of sentences
- NES: Non-entailment sentences
[MIMIC-CXR][mimic-cxr]
Model | CIDEr | BLEU4 | ROUGE-L | RadGraph F1 |
---|---|---|---|---|
PaliGemma 2 3B | 19.9 | 14.6 | 31.9 | 28.8 |
PaliGemma 2 10B | 17.4 | 15.0 | 32.4 | 29.5 |
[Visual Spatial Reasoning][vsr]
Model | VSR zero-shot split (test) | VSR random split (test) |
---|---|---|
PaliGemma 2 3B | 74.8 | 81.6 |
PaliGemma 2 10B | 79.8 | 86.8 |
Ethics and safety
Evaluation approach
Our evaluation methods include structured ethics and safety evaluations across relevant content policies, including:
- Human evaluation on prompts covering child safety, content safety, and representational harms. See the Gemma model card for more details on the evaluation approach, here applied to image captioning and visual question answering setups.
- Image-to-text benchmark evaluation: benchmarking against relevant academic datasets such as the FairFace dataset (Karkkainen et al., 2021).
Evaluation results
- The human evaluation results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child safety, content safety and representational harms.
- On top of robust internal evaluations, we also use the Perspective API (threshold of 0.8) to measure toxicity, profanity, and other potential issues in the generated captions for images sourced from the FairFace dataset. We report the maximum and median values observed across subgroups for each of the perceived gender, ethnicity, and age attributes.
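As one plausible reading of this protocol (per-subgroup flag rates, then the maximum and median across subgroups), the aggregation behind the table below might be sketched as follows. The helper name and toy scores are illustrative, not Google's evaluation code:

```python
from statistics import median

def flag_rates(scores_by_group, threshold=0.8):
    """Share of captions per subgroup whose Perspective score crosses the threshold."""
    return {
        group: sum(score >= threshold for score in scores) / len(scores)
        for group, scores in scores_by_group.items()
    }

# Toy example with made-up scores for two subgroups of one attribute.
rates = flag_rates({"group_a": [0.05, 0.91, 0.10], "group_b": [0.02, 0.03, 0.85]})
print(max(rates.values()), median(rates.values()))  # max and median across subgroups
```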
Metric | Perceived gender (3B) | Perceived gender (10B) | Perceived gender (28B) | Ethnicity (3B) | Ethnicity (10B) | Ethnicity (28B) | Age group (3B) | Age group (10B) | Age group (28B) |
---|---|---|---|---|---|---|---|---|---|
Maximum | | | | | | | | | |
Toxicity | 0.14% | 0.15% | 0.19% | 0.29% | 0.39% | 0.39% | 0.26% | 0.18% | 0.32% |
Identity Attack | 0.04% | 0.02% | 0.02% | 0.13% | 0.06% | 0.06% | 0.06% | 0.03% | 0.06% |
Insult | 0.17% | 0.25% | 0.17% | 0.37% | 0.52% | 0.52% | 0.27% | 0.39% | 0.24% |
Threat | 0.55% | 0.43% | 0.57% | 0.83% | 0.48% | 0.48% | 0.64% | 0.43% | 0.64% |
Profanity | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% |
Median | | | | | | | | | |
Toxicity | 0.13% | 0.10% | 0.18% | 0.07% | 0.07% | 0.14% | 0.12% | 0.08% | 0.12% |
Identity Attack | 0.02% | 0.01% | 0.02% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% |
Insult | 0.15% | 0.23% | 0.14% | 0.14% | 0.17% | 0.13% | 0.09% | 0.18% | 0.16% |
Threat | 0.35% | 0.27% | 0.41% | 0.28% | 0.19% | 0.42% | 0.27% | 0.31% | 0.40% |
Profanity | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% |
Usage and limitations
Intended usage
Open Vision Language Models (VLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use cases that the model creators considered as part of model training and development. Prohibited uses of Gemma models are outlined in the Gemma Prohibited Use Policy.
Fine-tuning on a specific vision-language task:
- The pre-trained models can be fine-tuned on a wide range of vision-language tasks such as image captioning, short-video captioning, visual question answering, text reading, object detection, and object segmentation.
- The pre-trained models can be fine-tuned for specific domains such as remote sensing question answering, visual question answering for people who are blind, science question answering, and describing UI element functionality.
- The pre-trained models can be fine-tuned for tasks with non-textual outputs such as bounding boxes or segmentation masks (see the sketch after this list).
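For detection-style outputs specifically, the model emits location tokens that must be decoded back into pixel coordinates. The sketch below assumes the documented PaliGemma convention of four `<locDDDD>` tokens per box (y_min, x_min, y_max, x_max on a 0-1023 grid) followed by a class label; the helper function is ours, for illustration only:

```python
import re

# Matches four <locDDDD> tokens followed by a label, e.g. the output of a
# "detect cat" prompt. Box order is y_min, x_min, y_max, x_max.
LOC_PATTERN = re.compile(
    r"<loc(\d{4})><loc(\d{4})><loc(\d{4})><loc(\d{4})>\s*([^;<]+)"
)

def parse_detections(text, image_width, image_height):
    """Decode PaliGemma location tokens into pixel-space bounding boxes."""
    boxes = []
    for y0, x0, y1, x1, label in LOC_PATTERN.findall(text):
        boxes.append({
            "label": label.strip(),
            # Rescale from the 1024-bin grid to pixel coordinates.
            "box": (
                int(x0) / 1024 * image_width,
                int(y0) / 1024 * image_height,
                int(x1) / 1024 * image_width,
                int(y1) / 1024 * image_height,
            ),
        })
    return boxes

print(parse_detections("<loc0100><loc0200><loc0900><loc0800> cat", 224, 224))
```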
Vision-language research:
- The pre-trained models and fine-tuned models can serve as a foundation for researchers to experiment with VLM techniques, develop algorithms, and contribute to the advancement of vision-language research.
📄 License
This project is released under the Gemma license.