Pixtral 12b
Developed by saujasv
Pixtral is a multimodal model based on the Mistral architecture that accepts image and text inputs and generates text outputs.
Downloads: 2,168
Released: 11/7/2024
Model Overview
Pixtral is a Transformers-compatible image-text-to-text model that supports multi-image input and complex instruction handling, suitable for tasks such as image captioning.
Model Features
Multi-image processing
Handles multiple image inputs at once and understands the relationships between them
Complex instruction understanding
Understands complex instructions that mix image and text inputs
Detailed description generation
Produces rich, well-structured image descriptions
Model Capabilities
Image captioning
Multimodal dialogue
Scene understanding
Cross-image analysis
Use Cases
Content generation
Image caption generation
Generate detailed descriptions for one or more images
Produces structured descriptions covering scene elements, object attributes, and contextual relationships
Assistive tools
Visual question answering
Answer natural-language questions about image content
Provides accurate answers grounded in the image content
🚀 Image-Text-to-Text Model
This project provides Transformers-compatible pixtral checkpoints for image-text-to-text generation, supporting applications such as image captioning.
🚀 Quick Start
These are Transformers-compatible pixtral checkpoints. Make sure to install transformers from source, or wait for the v4.45 release!
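A source install uses pip's standard VCS syntax (shown here against the official transformers repository; only needed until the release mentioned above ships):

```shell
# Install transformers from source (needed until v4.45 is released)
pip install git+https://github.com/huggingface/transformers.git
```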
💻 Usage Examples
Basic Usage
```python
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "mistral-community/pixtral-12b"
model = LlavaForConditionalGeneration.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

# The processor accepts image URLs directly and downloads them itself.
IMG_URLS = [
    "https://picsum.photos/id/237/400/300",
    "https://picsum.photos/id/231/200/300",
    "https://picsum.photos/id/27/500/500",
    "https://picsum.photos/id/17/150/600",
]
# One [IMG] placeholder token per input image.
PROMPT = "<s>[INST]Describe the images.\n[IMG][IMG][IMG][IMG][/INST]"

inputs = processor(text=PROMPT, images=IMG_URLS, return_tensors="pt").to("cuda")
generate_ids = model.generate(**inputs, max_new_tokens=500)
output = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
```
Running the code above should produce output similar to the following:
"""
Describe the images.
Sure, let's break down each image description:
1. **Image 1:**
- **Description:** A black dog with a glossy coat is sitting on a wooden floor. The dog has a focused expression and is looking directly at the camera.
- **Details:** The wooden floor has a rustic appearance with visible wood grain patterns. The dog's eyes are a striking color, possibly brown or amber, which contrasts with its black fur.
2. **Image 2:**
- **Description:** A scenic view of a mountainous landscape with a winding road cutting through it. The road is surrounded by lush green vegetation and leads to a distant valley.
- **Details:** The mountains are rugged with steep slopes, and the sky is clear, indicating good weather. The winding road adds a sense of depth and perspective to the image.
3. **Image 3:**
- **Description:** A beach scene with waves crashing against the shore. There are several people in the water and on the beach, enjoying the waves and the sunset.
- **Details:** The waves are powerful, creating a dynamic and lively atmosphere. The sky is painted with hues of orange and pink from the setting sun, adding a warm glow to the scene.
4. **Image 4:**
- **Description:** A garden path leading to a large tree with a bench underneath it. The path is bordered by well-maintained grass and flowers.
- **Details:** The path is made of small stones or gravel, and the tree provides a shaded area with the bench invitingly placed beneath it. The surrounding area is lush and green, suggesting a well-kept garden.
Each image captures a different scene, from a close-up of a dog to expansive natural landscapes, showcasing various elements of nature and human interaction with it.
"""
Advanced Usage
You can also use the chat template to format a chat history for Pixtral. Make sure the order of images in the `images` argument passed to the processor matches the order in which they appear in the chat, so the model can associate each image with its position.
```python
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "mistral-community/pixtral-12b"
model = LlavaForConditionalGeneration.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

url_dog = "https://picsum.photos/id/237/200/300"
url_mountain = "https://picsum.photos/seed/picsum/200/300"

# Text and image entries are interleaved in the order they should appear.
chat = [
    {
        "role": "user", "content": [
            {"type": "text", "content": "Can this animal"},
            {"type": "image"},
            {"type": "text", "content": "live here?"},
            {"type": "image"},
        ]
    }
]

prompt = processor.apply_chat_template(chat)
# Image order here must match the order of the image entries in the chat.
inputs = processor(text=prompt, images=[url_dog, url_mountain], return_tensors="pt").to(model.device)
generate_ids = model.generate(**inputs, max_new_tokens=500)
output = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
```
Running the code above should produce output similar to the following:
Can this animallive here?Certainly! Here are some details about the images you provided:
### First Image
- **Description**: The image shows a black dog lying on a wooden surface. The dog has a curious expression with its head tilted slightly to one side.
- **Details**: The dog appears to be a young puppy with soft, shiny fur. Its eyes are wide and alert, and it has a playful demeanor.
- **Context**: This image could be used to illustrate a pet-friendly environment or to showcase the dog's personality.
### Second Image
- **Description**: The image depicts a serene landscape with a snow-covered hill in the foreground. The sky is painted with soft hues of pink, orange, and purple, indicating a sunrise or sunset.
- **Details**: The hill is covered in a blanket of pristine white snow, and the horizon meets the sky in a gentle curve. The scene is calm and peaceful.
- **Context**: This image could be used to represent tranquility, natural beauty, or a winter wonderland.
### Combined Context
If you're asking whether the dog can "live here," referring to the snowy landscape, it would depend on the breed and its tolerance to cold weather. Some breeds, like Huskies or Saint Bernards, are well-adapted to cold environments, while others might struggle. The dog in the first image appears to be a breed that might prefer warmer climates.
Would you like more information on any specific aspect?
⚠️ Important Note
Although the spacing in the echoed input may look mangled, that is only because special tokens are skipped for display. "Can this animal" and "live here" are in fact correctly separated by image tokens. Try decoding with special tokens included to see exactly what the model receives!
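The interleaving described in the note above can be illustrated with a toy renderer. This is not the real chat template (which `apply_chat_template` implements); it is only a sketch of the prompt structure the template produces for a single user turn:

```python
# Toy sketch of how chat entries interleave text and [IMG] tokens.
# Not the actual Pixtral chat template; for illustration only.
def render_prompt(chat):
    parts = ["<s>"]
    for turn in chat:
        if turn["role"] == "user":
            parts.append("[INST]")
            for item in turn["content"]:
                if item["type"] == "text":
                    parts.append(item["content"])
                else:  # image entry becomes a placeholder token
                    parts.append("[IMG]")
            parts.append("[/INST]")
    return "".join(parts)

chat = [
    {
        "role": "user", "content": [
            {"type": "text", "content": "Can this animal"},
            {"type": "image"},
            {"type": "text", "content": "live here?"},
            {"type": "image"},
        ]
    }
]
print(render_prompt(chat))
# <s>[INST]Can this animal[IMG]live here?[IMG][/INST]
```

Seen this way, the text fragments are never adjacent in the token stream; the apparent run-on in the decoded output only appears once the [IMG] tokens are stripped.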