🚀 InternViT-6B-448px-V1-2
InternViT-6B-448px-V1-2 is a newly released set of weights. The InternVL 1.2 update involved continued pre-training of the InternViT-6B model. Specifically, the resolution of InternViT-6B was increased from 224 to 448, and the model was integrated with [Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B).
[📂 GitHub] [📜 InternVL 1.0] [📜 InternVL 1.5] [📜 Mini-InternVL] [📜 InternVL 2.5]
[🆕 Blog] [🗨️ Chat Demo] [🤗 HF Demo] [🚀 Quick Start] [📖 Documents]
✨ Key Features
- Higher resolution: the input resolution of InternViT-6B is increased from 224 to 448, improving the model's ability to handle high-resolution images.
- Integration: the model is integrated with [Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B), which improves overall performance.
- More data: additional OCR data is added on top of the general image-captioning datasets, strengthening the model's OCR capability.
- Fewer parameters: by dropping the last 3 blocks, the parameter count is reduced from 5.9 billion to 5.5 billion, saving GPU memory.
📦 Installation
The original documentation does not list specific installation steps.
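The usage example below only relies on `torch`, `transformers`, and `Pillow`, so installing them (for example with `pip install torch transformers pillow`) should be sufficient; this is an assumption based on the example's imports rather than an officially documented requirement.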
💻 Usage Example
Basic Usage
import torch
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor

# Load the vision backbone in bfloat16 and move it to the GPU for inference
model = AutoModel.from_pretrained(
    'OpenGVLab/InternViT-6B-448px-V1-2',
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True).cuda().eval()

# Load an example image and preprocess it into 448 x 448 pixel values
image = Image.open('./examples/image1.jpg').convert('RGB')
image_processor = CLIPImageProcessor.from_pretrained('OpenGVLab/InternViT-6B-448px-V1-2')
pixel_values = image_processor(images=image, return_tensors='pt').pixel_values
pixel_values = pixel_values.to(torch.bfloat16).cuda()

# Forward pass to extract visual features
outputs = model(pixel_values)
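The note in the documentation below recommends using the last layer's features when building an MLLM on top of this backbone. As a minimal sketch, assuming the checkpoint's remote code returns a standard `transformers`-style output object with a `last_hidden_state` field (verify this against the actual model code), those features can be read as follows:

```python
# Assumption: `outputs` behaves like a standard transformers model output.
# `last_hidden_state` then holds the per-token features of the final (45th) block.
features = outputs.last_hidden_state
print(features.shape)  # (batch_size, num_tokens, hidden_size)
```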
📚 Documentation
Model Details

| Property | Details |
|----------|---------|
| Model type | Vision foundation model, feature backbone |
| Model parameters | Parameters (M): 5540 (last 3 blocks dropped); image size: 448 x 448 |
| Pre-training datasets | LAION-en, LAION-zh, COYO, GRIT, COCO, TextCaps, Objects365, OpenImages, All-Seeing, Wukong-OCR, LaionCOCO-OCR, and other OCR-related datasets |
Notes
InternViT-6B originally had 48 blocks. Experiments showed that, for multimodal large language models (MLLMs), the best results come from using the output after the fourth-to-last block. For convenience and to save GPU memory, the last 3 blocks were therefore dropped. The model now has 45 blocks, and its parameter count is reduced from 5.9 billion to 5.5 billion. Consequently, if you build an MLLM on top of this model, use the features from the last layer.
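As a quick, hedged sanity check of the 5.5 billion figure (assuming the `model` object from the usage example above is in scope), the parameters of the loaded backbone can be counted directly:

```python
# Count all parameters of the loaded InternViT backbone; for this 45-block
# checkpoint the total should come out at roughly 5.5B.
num_params = sum(p.numel() for p in model.parameters())
print(f"{num_params / 1e9:.2f}B parameters")
```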
⚠️ Important Note
In our experience, the InternViT V2.5 series is better suited to building multimodal large language models than to traditional computer vision tasks.
📄 License
This project is released under the MIT License.
🔖 Citation
If you find this project useful in your research, please consider citing:
@article{chen2024expanding,
title={Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling},
author={Chen, Zhe and Wang, Weiyun and Cao, Yue and Liu, Yangzhou and Gao, Zhangwei and Cui, Erfei and Zhu, Jinguo and Ye, Shenglong and Tian, Hao and Liu, Zhaoyang and others},
journal={arXiv preprint arXiv:2412.05271},
year={2024}
}
@article{gao2024mini,
title={Mini-internvl: A flexible-transfer pocket multimodal model with 5\% parameters and 90\% performance},
author={Gao, Zhangwei and Chen, Zhe and Cui, Erfei and Ren, Yiming and Wang, Weiyun and Zhu, Jinguo and Tian, Hao and Ye, Shenglong and He, Junjun and Zhu, Xizhou and others},
journal={arXiv preprint arXiv:2410.16261},
year={2024}
}
@article{chen2024far,
title={How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites},
author={Chen, Zhe and Wang, Weiyun and Tian, Hao and Ye, Shenglong and Gao, Zhangwei and Cui, Erfei and Tong, Wenwen and Hu, Kongzhi and Luo, Jiapeng and Ma, Zheng and others},
journal={arXiv preprint arXiv:2404.16821},
year={2024}
}
@inproceedings{chen2024internvl,
title={Internvl: Scaling up vision foundation models and aligning for generic visual-linguistic tasks},
author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and others},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={24185--24198},
year={2024}
}