🚀 FG-CLIP: Fine-Grained Visual and Textual Alignment
FG-CLIP is a model focused on fine-grained visual and textual alignment. Through a staged training procedure that uses both global-level and region-level image-text pairs, it progressively refines alignment quality, delivering strong performance on tasks such as image recognition and text matching.
🚀 Quick Start
Load the Model
```python
import torch
from PIL import Image
from transformers import (
    AutoImageProcessor,
    AutoTokenizer,
    AutoModelForCausalLM,
)

model_root = "qihoo360/fg-clip-base"
image_size = 224

model = AutoModelForCausalLM.from_pretrained(model_root, trust_remote_code=True).cuda()
device = model.device
tokenizer = AutoTokenizer.from_pretrained(model_root)
image_processor = AutoImageProcessor.from_pretrained(model_root)
```
Retrieval
```python
img_root = "FG-CLIP/use_imgs/cat_dfclor.jpg"
image = Image.open(img_root).convert("RGB")
image = image.resize((image_size, image_size))
image_input = image_processor.preprocess(image, return_tensors='pt')['pixel_values'].to(device)

# Note: for short captions, use max_length=77 and walk_short_pos=True
walk_short_pos = True
captions = ["a photo of a cat", "a photo of a dog"]
caption_input = torch.tensor(
    tokenizer(captions, max_length=77, padding="max_length", truncation=True).input_ids,
    dtype=torch.long, device=device,
)

# Note: for long captions, use max_length=248 and walk_short_pos=False
# ......

with torch.no_grad():
    image_feature = model.get_image_features(image_input)
    text_feature = model.get_text_features(caption_input, walk_short_pos=walk_short_pos)
    image_feature = image_feature / image_feature.norm(p=2, dim=-1, keepdim=True)
    text_feature = text_feature / text_feature.norm(p=2, dim=-1, keepdim=True)

logits_per_image = image_feature @ text_feature.T
logits_per_image = model.logit_scale.exp() * logits_per_image
probs = logits_per_image.softmax(dim=1)
print(probs)
# [[9.9997e-01, 3.3485e-05]]
```
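The scoring step above is ordinary CLIP-style matching: L2-normalize both embeddings, take their dot product (a cosine similarity), scale by the exponentiated learned temperature, and softmax over the candidate captions. A self-contained sketch of just that arithmetic with random stand-in features (the embedding dimension and temperature value here are illustrative, not the model's actual ones):

```python
import torch

torch.manual_seed(0)

# Illustrative stand-ins for model outputs: 1 image, 2 captions, 512-d embeddings.
image_feature = torch.randn(1, 512)
text_feature = torch.randn(2, 512)
logit_scale = torch.tensor(100.0).log()  # assumed temperature, for illustration only

# L2-normalize so the dot product becomes a cosine similarity in [-1, 1].
image_feature = image_feature / image_feature.norm(p=2, dim=-1, keepdim=True)
text_feature = text_feature / text_feature.norm(p=2, dim=-1, keepdim=True)

# Scaled cosine similarities, then a softmax over the candidate captions.
logits_per_image = logit_scale.exp() * image_feature @ text_feature.T
probs = logits_per_image.softmax(dim=1)
print(probs)  # one probability per caption; each row sums to 1
```

Because the softmax is taken over the caption axis, the output is a distribution over captions for each image, which is why the cat caption dominates in the real example above.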
Dense Feature Visualization
```python
import math
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt

img_root = "FG-CLIP/use_imgs/cat_dfclor.jpg"
image = Image.open(img_root).convert("RGB")
image = image.resize((image_size, image_size))
image_input = image_processor.preprocess(image, return_tensors='pt')['pixel_values'].to(device)

with torch.no_grad():
    dense_image_feature = model.get_image_dense_features(image_input)
    captions = ["white cat"]
    caption_input = torch.tensor(
        tokenizer(captions, max_length=77, padding="max_length", truncation=True).input_ids,
        dtype=torch.long, device=device,
    )
    text_feature = model.get_text_features(caption_input, walk_short_pos=True)
    text_feature = text_feature / text_feature.norm(p=2, dim=-1, keepdim=True)
    dense_image_feature = dense_image_feature / dense_image_feature.norm(p=2, dim=-1, keepdim=True)

similarity = dense_image_feature.squeeze() @ text_feature.squeeze().T
similarity = similarity.cpu().numpy()
patch_size = int(math.sqrt(similarity.shape[0]))
original_shape = (patch_size, patch_size)
show_image = similarity.reshape(original_shape)

plt.figure(figsize=(6, 6))
plt.imshow(show_image)
plt.title('Similarity Visualization')
plt.axis('off')
plt.savefig("FG-CLIP/use_imgs/FGCLIP_dfcolor_cat.png")
```
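The final reshape works because the dense features are one vector per image patch, laid out row-major, so the per-patch cosine scores fold back into a square spatial grid. A self-contained sketch of that geometry with random stand-in features, assuming a 14×14 patch grid (196 tokens; the actual grid depends on the checkpoint's vision backbone and input resolution):

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins: 196 patch embeddings and one text embedding, 512-d each.
dense_image_feature = rng.standard_normal((196, 512))
text_feature = rng.standard_normal(512)

# L2-normalize so each per-patch score is a cosine similarity.
dense_image_feature /= np.linalg.norm(dense_image_feature, axis=-1, keepdims=True)
text_feature /= np.linalg.norm(text_feature)

similarity = dense_image_feature @ text_feature       # shape (196,)
patch_size = int(math.sqrt(similarity.shape[0]))      # 14
heatmap = similarity.reshape(patch_size, patch_size)  # 14x14 spatial map
print(heatmap.shape)
```

With real features, bright cells in this map indicate patches whose content matches the query text, which is what the saved visualization shows for "white cat".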
✨ Key Features
FG-CLIP is trained in two stages: the first stage uses global-level image-text pairs to establish initial fine-grained alignment, while the second stage adds region-level supervision, including detailed regional captions and positive/negative region descriptions, to further refine the alignment.
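At its core, the global-alignment stage rests on the standard symmetric contrastive (InfoNCE) objective popularized by CLIP; the region-level additions of the second stage are not reproduced here. A minimal sketch of that symmetric loss over a batch of paired embeddings (all shapes and the temperature are illustrative assumptions, not FG-CLIP's exact training code):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Illustrative batch of 8 paired global image/text embeddings, 512-d, unit-normalized.
image_emb = F.normalize(torch.randn(8, 512), dim=-1)
text_emb = F.normalize(torch.randn(8, 512), dim=-1)
logit_scale = torch.tensor(100.0).log()  # assumed learned temperature

# Pairwise scaled cosine similarities; the diagonal holds the matched pairs.
logits = logit_scale.exp() * image_emb @ text_emb.T
targets = torch.arange(logits.size(0))

# Symmetric cross-entropy: average the image-to-text and text-to-image directions.
loss = (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets)) / 2
print(loss.item())
```

Each image is pushed toward its own caption and away from the other captions in the batch, and vice versa; the second-stage region supervision extends this idea from whole images to image regions.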
📚 Documentation
FG-CLIP: Fine-Grained Visual and Textual Alignment
Chunyu Xie*, Bin Wang*, Fanjing Kong, Jincheng Li, Dawei Liang, Gengshen Zhang, Dawei Leng†, Yuhui Yin (*equal contribution, †corresponding author)

📄 License
This project uses datasets and checkpoints that are subject to their respective original licenses. Users must comply with all terms and conditions of those original licenses.
The content of this project itself is licensed under the Apache License 2.0.
📚 Citation
If you find FG-CLIP helpful for your research or applications, please cite it using the following BibTeX:
```bibtex
@article{xie2025fgclip,
  title={FG-CLIP: Fine-Grained Visual and Textual Alignment},
  author={Chunyu Xie and Bin Wang and Fanjing Kong and Jincheng Li and Dawei Liang and Gengshen Zhang and Dawei Leng and Yuhui Yin},
  year={2025},
  eprint={2505.05071},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2505.05071},
}
```