# Multimodal visual reasoning
## Llama 3.2 11B Vision Instruct INT4 GPTQ

Llama 3.2-Vision is a multimodal large language model developed by Meta, with image reasoning and text generation capabilities, supporting tasks such as visual recognition, image description, and visual question answering.

Tags: Image-to-Text · Transformers · Supports Multiple Languages

fahadh4ilyas · 1,770 · 1
## Phi 3.5 Vision Instruct

Phi-3.5-vision is a lightweight, state-of-the-art open multimodal model with a 128K-token context length, focused on high-quality, reasoning-rich text and visual data.

Tags: MIT · Image-to-Text · Transformers · Other

microsoft · 397.38k · 679