🚀 InternViT-6B-448px-V1-5
InternViT-6B-448px-V1-5 is pre-trained on the strong foundation of InternViT-6B-448px-V1-2. In this version, the training image resolution is extended from fixed 448×448 to dynamic 448×448, with a base tile size of 448×448 and the number of tiles ranging from 1 to 12. In addition, the scale, quality, and diversity of the pre-training data are enhanced, giving the model strong robustness, OCR capability, and high-resolution processing ability.
[📂 GitHub] [📜 InternVL 1.0] [📜 InternVL 1.5] [📜 Mini-InternVL] [📜 InternVL 2.5]
[🆕 Blog] [🗨️ Chat Demo] [🤗 HF Demo] [🚀 Quick Start] [📖 Documentation]
🚀 Quick Start
⚠️ Important Note
In our experience, the InternViT V2.5 series is better suited to building multimodal large language models (MLLMs) than to traditional computer vision tasks.
```python
import torch
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor

# Load the vision encoder in bfloat16 on the GPU
model = AutoModel.from_pretrained(
    'OpenGVLab/InternViT-6B-448px-V1-5',
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True).cuda().eval()

# Preprocess a single image into pixel values
image = Image.open('./examples/image1.jpg').convert('RGB')
image_processor = CLIPImageProcessor.from_pretrained('OpenGVLab/InternViT-6B-448px-V1-5')
pixel_values = image_processor(images=image, return_tensors='pt').pixel_values
pixel_values = pixel_values.to(torch.bfloat16).cuda()

# Forward pass to extract visual features
outputs = model(pixel_values)
```
✨ Key Features
- Resolution extension: the training image resolution is extended from fixed 448×448 to dynamic 448×448, with a base tile size of 448×448 and the number of tiles ranging from 1 to 12 (a minimal tiling sketch follows this list).
- Data enhancement: the scale, quality, and diversity of the pre-training data are improved, giving the model strong robustness, OCR capability, and high-resolution processing ability.
- Parameter reduction: for ease of use and to save GPU memory, the last 3 blocks are discarded, reducing the parameter count from 5.9B to 5.5B.
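The dynamic-resolution setting above amounts to splitting each input image into between 1 and 12 tiles of 448×448. The snippet below is a minimal sketch of such a tiling step, not the official preprocessing pipeline; the helper names `pick_grid` and `tile_image` and the aspect-ratio heuristic are illustrative assumptions.

```python
from PIL import Image

TILE = 448       # base tile size used by the model
MAX_TILES = 12   # the number of tiles ranges from 1 to 12

def pick_grid(width, height, max_tiles=MAX_TILES):
    # Hypothetical helper: choose a (cols, rows) grid with cols * rows <= max_tiles
    # whose aspect ratio is closest to the input image's aspect ratio.
    candidates = [(c, r) for c in range(1, max_tiles + 1)
                  for r in range(1, max_tiles + 1) if c * r <= max_tiles]
    target = width / height
    return min(candidates, key=lambda cr: abs(cr[0] / cr[1] - target))

def tile_image(image, tile=TILE, max_tiles=MAX_TILES):
    # Resize the image onto the chosen grid and crop it into 448x448 tiles.
    cols, rows = pick_grid(image.width, image.height, max_tiles)
    resized = image.resize((cols * tile, rows * tile))
    return [resized.crop((c * tile, r * tile, (c + 1) * tile, (r + 1) * tile))
            for r in range(rows) for c in range(cols)]

tiles = tile_image(Image.open('./examples/image1.jpg').convert('RGB'))
print(len(tiles))  # between 1 and 12 tiles, each 448x448
```

Each tile can then be preprocessed with `CLIPImageProcessor` and passed through the model exactly as in the Quick Start example.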
📦 Installation
The original documentation does not provide specific installation steps, so this section is omitted.
📚 Documentation
Model Details

| Property | Details |
|----------|---------|
| Model type | Vision foundation model, feature backbone |
| Training data | LAION-en, LAION-zh, COYO, GRIT, COCO, TextCaps, Objects365, OpenImages, All-Seeing, Wukong-OCR, LaionCOCO-OCR, and other datasets. To strengthen the model's OCR capability, additional OCR data was added on top of these general caption datasets: PaddleOCR was used to perform Chinese OCR on Wukong images and English OCR on LAION-COCO images. |
| Notes | InternViT-6B originally had 48 blocks, and we found that the output after the fourth-to-last block worked best for MLLMs. For ease of use and to save GPU memory, the last 3 blocks were discarded, leaving 45 blocks and reducing the parameter count from 5.9B to 5.5B. Therefore, if you build an MLLM on top of this model, please use the features from the last layer. |
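Since the trailing blocks have already been removed, the model's final hidden states are what an MLLM would consume. The snippet below is a minimal sketch of extracting them; the `last_hidden_state` field and the quoted shape assume a standard Hugging Face `BaseModelOutputWithPooling`-style output and a 14×14 patch size with a class token, which are assumptions rather than documented guarantees.

```python
import torch
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor

model = AutoModel.from_pretrained(
    'OpenGVLab/InternViT-6B-448px-V1-5',
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True).cuda().eval()
image_processor = CLIPImageProcessor.from_pretrained('OpenGVLab/InternViT-6B-448px-V1-5')

image = Image.open('./examples/image1.jpg').convert('RGB')
pixel_values = image_processor(images=image, return_tensors='pt').pixel_values
pixel_values = pixel_values.to(torch.bfloat16).cuda()

with torch.no_grad():
    outputs = model(pixel_values)

# Assumption: the output exposes `last_hidden_state`; for a 448x448 input this
# would be roughly (1, 1025, 3200), a class token followed by patch tokens.
visual_tokens = outputs.last_hidden_state
patch_tokens = visual_tokens[:, 1:, :]  # drop the class token before feeding an MLLM projector
print(patch_tokens.shape)
```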
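The training-data row above mentions generating OCR annotations with PaddleOCR for Wukong (Chinese) and LAION-COCO (English) images. The snippet below is a minimal sketch of that kind of pseudo-labeling step, assuming the classic PaddleOCR 2.x `ocr()` API; joining recognized lines into a caption-like string is an illustrative choice, not the actual data pipeline.

```python
from paddleocr import PaddleOCR  # pip install paddleocr

# Chinese OCR for Wukong-style images; use lang='en' for LAION-COCO-style images.
ocr = PaddleOCR(lang='ch', use_angle_cls=True)

def ocr_caption(image_path):
    # Run OCR on one image and join the recognized lines into a caption-like string.
    result = ocr.ocr(image_path, cls=True)
    if not result or result[0] is None:
        return ''
    # In PaddleOCR 2.x each detected line is (bounding_box, (text, confidence)).
    return ' '.join(text for _, (text, _) in result[0])

print(ocr_caption('./examples/image1.jpg'))
```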
Citation
If you find this project useful in your research, please consider citing:
```bibtex
@article{chen2024expanding,
title={Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling},
author={Chen, Zhe and Wang, Weiyun and Cao, Yue and Liu, Yangzhou and Gao, Zhangwei and Cui, Erfei and Zhu, Jinguo and Ye, Shenglong and Tian, Hao and Liu, Zhaoyang and others},
journal={arXiv preprint arXiv:2412.05271},
year={2024}
}
@article{gao2024mini,
title={Mini-internvl: A flexible-transfer pocket multimodal model with 5\% parameters and 90\% performance},
author={Gao, Zhangwei and Chen, Zhe and Cui, Erfei and Ren, Yiming and Wang, Weiyun and Zhu, Jinguo and Tian, Hao and Ye, Shenglong and He, Junjun and Zhu, Xizhou and others},
journal={arXiv preprint arXiv:2410.16261},
year={2024}
}
@article{chen2024far,
title={How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites},
author={Chen, Zhe and Wang, Weiyun and Tian, Hao and Ye, Shenglong and Gao, Zhangwei and Cui, Erfei and Tong, Wenwen and Hu, Kongzhi and Luo, Jiapeng and Ma, Zheng and others},
journal={arXiv preprint arXiv:2404.16821},
year={2024}
}
@inproceedings{chen2024internvl,
title={Internvl: Scaling up vision foundation models and aligning for generic visual-linguistic tasks},
author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and others},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={24185--24198},
year={2024}
}
```
📄 License
This project is released under the MIT License.