🚀 Model card for Pix2Struct - Finetuned on Screen2Words (Captioning a UI screen to describe functionality)
Pix2Struct is an image encoder-text decoder model trained on image-text pairs, suitable for tasks like image captioning and visual question answering.

🚀 Quick Start
Pix2Struct is an image encoder-text decoder model trained on image-text pairs for multiple tasks, including image captioning and visual question answering. It can handle visually-situated language tasks and achieve state-of-the-art results in many scenarios.
✨ Features
- Versatile Task Support: Capable of handling various tasks such as image captioning and visual question answering.
- Rich Pretraining Data: Pretrained by parsing masked web-page screenshots into simplified HTML, leveraging the rich visual elements on the web.
- Flexible Input Representation: Introduces a variable-resolution input representation and a more flexible integration of language and vision inputs.
📦 Installation
Converting from T5x to Hugging Face
You can use the convert_pix2struct_checkpoint_to_pytorch.py script as follows:
python convert_pix2struct_checkpoint_to_pytorch.py --t5x_checkpoint_path PATH_TO_T5X_CHECKPOINTS --pytorch_dump_path PATH_TO_SAVE
If you are converting a large model, run:
python convert_pix2struct_checkpoint_to_pytorch.py --t5x_checkpoint_path PATH_TO_T5X_CHECKPOINTS --pytorch_dump_path PATH_TO_SAVE --use-large
Once saved, you can push your converted model with the following snippet:
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor
model = Pix2StructForConditionalGeneration.from_pretrained(PATH_TO_SAVE)
processor = Pix2StructProcessor.from_pretrained(PATH_TO_SAVE)
model.push_to_hub("USERNAME/MODEL_NAME")
processor.push_to_hub("USERNAME/MODEL_NAME")
💻 Usage Examples
Basic Usage
The instructions for running this model are exactly the same as those for the pix2struct-textcaps-base model.
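As a minimal sketch, assuming the checkpoint is published on the Hub under an id such as google/pix2struct-screen2words-base (adjust to the actual repo id of this model), captioning a UI screenshot looks like this:

```python
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

# Assumed repo id; replace with the actual id of this checkpoint.
checkpoint = "google/pix2struct-screen2words-base"
processor = Pix2StructProcessor.from_pretrained(checkpoint)
model = Pix2StructForConditionalGeneration.from_pretrained(checkpoint)

# Any UI screenshot will do; the path below is only a placeholder.
image = Image.open("screenshot.png")

# The processor turns the screenshot into variable-resolution patches.
inputs = processor(images=image, return_tensors="pt")

# Generate a short natural-language description of the screen.
generated_ids = model.generate(**inputs, max_new_tokens=50)
caption = processor.decode(generated_ids[0], skip_special_tokens=True)
print(caption)
```

The processor and model classes, as well as the generate/decode calls, mirror the pix2struct-textcaps-base instructions referenced above; only the checkpoint id changes.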
📚 Documentation
TL;DR
Pix2Struct is an image encoder-text decoder model that is trained on image-text pairs for various tasks, including image captioning and visual question answering. The full list of available models can be found in Table 1 of the paper.

The abstract of the paper states:
Visually-situated language is ubiquitous—sources range from textbooks with diagrams to web pages with images and tables, to mobile apps with buttons and forms. Perhaps due to this diversity, previous work has typically relied on domain-specific recipes with limited sharing of the underlying data, model architectures, and objectives. We present Pix2Struct, a pretrained image-to-text model for purely visual language understanding, which can be finetuned on tasks containing visually-situated language. Pix2Struct is pretrained by learning to parse masked screenshots of web pages into simplified HTML. The web, with its richness of visual elements cleanly reflected in the HTML structure, provides a large source of pretraining data well suited to the diversity of downstream tasks. Intuitively, this objective subsumes common pretraining signals such as OCR, language modeling, image captioning. In addition to the novel pretraining strategy, we introduce a variable-resolution input representation and a more flexible integration of language and vision inputs, where language prompts such as questions are rendered directly on top of the input image. For the first time, we show that a single pretrained model can achieve state-of-the-art results in six out of nine tasks across four domains: documents, illustrations, user interfaces, and natural images.
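To make the last two points of the abstract concrete (language prompts rendered onto the image and the variable-resolution input), here is a hedged sketch using a VQA-finetuned Pix2Struct checkpoint; the repo id google/pix2struct-docvqa-base and the example image path are assumptions for illustration, not part of this card:

```python
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

# Assumed VQA checkpoint (not the Screen2Words model described on this card).
checkpoint = "google/pix2struct-docvqa-base"
processor = Pix2StructProcessor.from_pretrained(checkpoint)
model = Pix2StructForConditionalGeneration.from_pretrained(checkpoint)

image = Image.open("document.png")  # placeholder document/screenshot image

# For VQA checkpoints the processor renders the question text directly onto the image;
# max_patches bounds the number of variable-resolution patches extracted from it.
inputs = processor(images=image, text="What is the title of this document?",
                   return_tensors="pt", max_patches=1024)

generated_ids = model.generate(**inputs, max_new_tokens=20)
print(processor.decode(generated_ids[0], skip_special_tokens=True))
```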
📄 License
This model is licensed under the Apache 2.0 license.
👥 Contribution
This model was originally contributed by Kenton Lee, Mandar Joshi et al. and added to the Hugging Face ecosystem by Younes Belkada.
📚 Citation
If you want to cite this work, please consider citing the original paper:
@misc{https://doi.org/10.48550/arxiv.2210.03347,
  doi = {10.48550/ARXIV.2210.03347},
  url = {https://arxiv.org/abs/2210.03347},
  author = {Lee, Kenton and Joshi, Mandar and Turc, Iulia and Hu, Hexiang and Liu, Fangyu and Eisenschlos, Julian and Khandelwal, Urvashi and Shaw, Peter and Chang, Ming-Wei and Toutanova, Kristina},
  keywords = {Computation and Language (cs.CL), Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences},
  title = {Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}