🚀 Mini Image Captioning Model
This is an image captioning model based on bert-mini and vit-small. The model is only about 130MB in size and delivers fast inference even on CPU.
🚀 Quick Start
The model, built on bert-mini and vit-small, quickly generates a caption for a given image.
```python
from transformers import AutoTokenizer, AutoImageProcessor, VisionEncoderDecoderModel
import requests, time
from PIL import Image

model_path = "cnmoro/mini-image-captioning"

# Load the encoder-decoder model, tokenizer, and image processor
model = VisionEncoderDecoderModel.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
image_processor = AutoImageProcessor.from_pretrained(model_path)

# Download a sample image and preprocess it into pixel values
url = "https://upload.wikimedia.org/wikipedia/commons/thumb/4/47/New_york_times_square-terabass.jpg/800px-New_york_times_square-terabass.jpg"
image = Image.open(requests.get(url, stream=True).raw)
pixel_values = image_processor(image, return_tensors="pt").pixel_values

start = time.time()

# Generate a caption with beam search
generated_ids = model.generate(
    pixel_values,
    temperature=0.7,
    top_p=0.8,
    top_k=50,
    num_beams=3
)
generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

end = time.time()

print(generated_text)
print(f"Time taken: {end - start} seconds")
```
💻 Usage Examples
Basic Usage
Basic usage is the same as the Quick Start example above: load the model, tokenizer, and image processor, preprocess the image, and call `model.generate` with beam search (`num_beams=3`).
Advanced Usage

The only change from the basic example is `num_beams=1`, which skips beam search in favor of a single greedy decoding pass and is therefore faster.
```python
from transformers import AutoTokenizer, AutoImageProcessor, VisionEncoderDecoderModel
import requests, time
from PIL import Image

model_path = "cnmoro/mini-image-captioning"

model = VisionEncoderDecoderModel.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
image_processor = AutoImageProcessor.from_pretrained(model_path)

url = "https://upload.wikimedia.org/wikipedia/commons/thumb/4/47/New_york_times_square-terabass.jpg/800px-New_york_times_square-terabass.jpg"
image = Image.open(requests.get(url, stream=True).raw)
pixel_values = image_processor(image, return_tensors="pt").pixel_values

start = time.time()

# num_beams=1 disables beam search for faster, greedy decoding
generated_ids = model.generate(
    pixel_values,
    temperature=0.7,
    top_p=0.8,
    top_k=50,
    num_beams=1
)
generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

end = time.time()

print(generated_text)
print(f"Time taken: {end - start} seconds")
```
📄 License
This project is licensed under the Apache-2.0 License.
📚 Documentation
| Property | Details |
|----------|---------|
| Base models | google/bert_uncased_L-4_H-256_A-4, WinKawaks/vit-small-patch16-224 |
| Task type | image-to-text |
| Library | transformers |
| Tags | vit, bert, vision, caption, captioning, image |
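Because the task type listed above is image-to-text, the model should also be reachable through the high-level `pipeline` API in transformers; whether this particular checkpoint is wired up for it is an assumption, so treat the following as a sketch:

```python
from transformers import pipeline

# Assumes the checkpoint works with the generic image-to-text pipeline
captioner = pipeline("image-to-text", model="cnmoro/mini-image-captioning")

url = "https://upload.wikimedia.org/wikipedia/commons/thumb/4/47/New_york_times_square-terabass.jpg/800px-New_york_times_square-terabass.jpg"
result = captioner(url)
print(result[0]["generated_text"])
```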