# Taiyi-CLIP-RoBERTa-102M-ViT-L-Chinese
The first open-source Chinese CLIP model, built on a RoBERTa-base text encoder and pre-trained on 123 million image-text pairs.
## Quick Start
Taiyi-CLIP-RoBERTa-102M-ViT-L-Chinese is an open-source Chinese CLIP model. It provides strong vision-language representations and can be used for a variety of multimodal tasks such as zero-shot image classification and text-to-image retrieval.
⨠Features
- Open - source Chinese CLIP: It's the first open - source Chinese CLIP in the Huggingface community.
- Powerful Representation: Follows the CLIP experimental setup to obtain strong visual - language representations.
- Stable Training: Freezes the visual encoder and fine - tunes the language encoder for fast and stable pre - training.
## Installation
No dedicated installation steps are provided; the usage example below only requires `torch`, `transformers`, `open_clip` (published on PyPI as `open_clip_torch`), `Pillow`, `requests`, and `numpy`, e.g. `pip install torch transformers open_clip_torch pillow requests numpy`.
## Usage Examples
### Basic Usage
```python
from PIL import Image
import requests
import numpy as np
import torch
import open_clip
from transformers import BertModel, BertTokenizer

# Candidate captions ("a cat", "a dog", "two cats", "two tigers", "a tiger");
# replace them with any Chinese text you like.
query_texts = ["一只猫", "一只狗", "两只猫", "两只老虎", "一只老虎"]

# Load the Taiyi Chinese text encoder.
text_tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Taiyi-CLIP-RoBERTa-102M-ViT-L-Chinese")
text_encoder = BertModel.from_pretrained("IDEA-CCNL/Taiyi-CLIP-RoBERTa-102M-ViT-L-Chinese").eval()

# Any image URL works here.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"

# Load open_clip's ViT-L/14 image encoder.
clip_model, _, processor = open_clip.create_model_and_transforms('ViT-L-14', pretrained='openai')
clip_model = clip_model.eval()

text = text_tokenizer(query_texts, return_tensors='pt', padding=True)['input_ids']
image = processor(Image.open(requests.get(url, stream=True).raw)).unsqueeze(0)

with torch.no_grad():
    image_features = clip_model.encode_image(image)
    text_features = text_encoder(text)[1]  # pooler output
    # L2-normalize both feature sets.
    image_features = image_features / image_features.norm(dim=1, keepdim=True)
    text_features = text_features / text_features.norm(dim=1, keepdim=True)
    # Cosine similarity scaled by the learned temperature (logit_scale).
    logit_scale = clip_model.logit_scale.exp()
    logits_per_image = logit_scale * image_features @ text_features.t()
    logits_per_text = logits_per_image.t()
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()
    print(np.around(probs, 3))
```
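As a small follow-on to the example above, `probs` holds one row of caption probabilities per image, so the best-matching caption can be read off directly:

```python
# probs has shape (num_images, num_texts); indices follow the order of query_texts.
best_idx = int(probs.argmax(axis=-1)[0])
print(f"best caption: {query_texts[best_idx]} (p={probs[0, best_idx]:.3f})")
```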
## Documentation
### Model Taxonomy
| Property  | Details        |
|-----------|----------------|
| Demand    | Special        |
| Task      | Multimodal     |
| Series    | Taiyi          |
| Model     | CLIP (RoBERTa) |
| Parameter | 102M           |
| Extra     | Chinese        |
### Model Information
We follow the experimental setup of CLIP to obtain powerful vision-language representations. To build a Chinese CLIP, we employ [chinese-roberta-wwm](https://huggingface.co/hfl/chinese-roberta-wwm-ext) as the language encoder and the ViT-L-14 from open_clip as the vision encoder. We freeze the vision encoder and tune only the language encoder, which speeds up and stabilizes pre-training. We use the [Noah-Wukong](https://wukong-dataset.github.io/wukong-dataset/) dataset (100M) and the Zero dataset (23M) as pre-training data. The model was trained for 24 epochs on Wukong and Zero, which took 6 days on 32 A100 GPUs. To the best of our knowledge, Taiyi-CLIP is currently the only open-source Chinese CLIP in the Hugging Face community.
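The released card does not include training code; the following is a minimal sketch of the recipe described above (a frozen open_clip ViT-L/14 vision tower, a trainable chinese-roberta-wwm text tower, and a CLIP-style symmetric contrastive loss). The function `contrastive_step`, the optimizer settings, and the data handling are illustrative assumptions, not the released training script.

```python
import torch
import torch.nn.functional as F
import open_clip
from transformers import BertModel

# Illustrative sketch of the described recipe, not the released training code.
clip_model, _, _ = open_clip.create_model_and_transforms('ViT-L-14', pretrained='openai')
text_encoder = BertModel.from_pretrained("hfl/chinese-roberta-wwm-ext")

# Freeze the whole open_clip model (vision tower and its built-in text tower);
# only the Chinese RoBERTa text encoder is updated.
for p in clip_model.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.AdamW(text_encoder.parameters(), lr=1e-5)  # hyperparameters assumed

def contrastive_step(images, input_ids):
    """One hypothetical training step on a batch of aligned image-text pairs."""
    with torch.no_grad():
        img = clip_model.encode_image(images)               # (B, 768), frozen features
    txt = text_encoder(input_ids).pooler_output             # (B, 768), trainable features
    img = F.normalize(img, dim=-1)
    txt = F.normalize(txt, dim=-1)
    logits = clip_model.logit_scale.exp() * img @ txt.t()   # (B, B) scaled similarities
    labels = torch.arange(len(logits), device=logits.device)
    # Symmetric InfoNCE: matched pairs sit on the diagonal.
    loss = (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```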
### Downstream Performance
#### Zero-Shot Classification

| Model | Dataset | Top1 | Top5 |
|-------|---------|------|------|
| Taiyi-CLIP-RoBERTa-102M-ViT-L-Chinese | ImageNet1k-CN | 55.04% | 81.75% |
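Zero-shot classification with this encoder pair follows the standard CLIP recipe: embed every class name through a Chinese prompt template, embed the image, and rank the classes by cosine similarity. Below is a minimal sketch that reuses `text_tokenizer`, `text_encoder`, `clip_model`, and `image` from the Basic Usage example; the class names and the prompt template are illustrative, and this is not the exact evaluation script behind the numbers above.

```python
import torch

# Hypothetical class names ("cat", "dog", "tiger") and prompt template ("a photo of a <class>").
class_names = ["猫", "狗", "老虎"]
prompts = [f"一张{name}的照片" for name in class_names]

with torch.no_grad():
    ids = text_tokenizer(prompts, return_tensors='pt', padding=True)['input_ids']
    class_feats = text_encoder(ids).pooler_output
    class_feats = class_feats / class_feats.norm(dim=1, keepdim=True)

    img_feat = clip_model.encode_image(image)
    img_feat = img_feat / img_feat.norm(dim=1, keepdim=True)

    scores = (img_feat @ class_feats.t()).squeeze(0)   # cosine similarity per class
    top = scores.topk(min(5, len(class_names)))        # Top-5 (or fewer) predictions
    for s, i in zip(top.values, top.indices):
        print(f"{class_names[int(i)]}: {s.item():.3f}")
```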
#### Zero-Shot Text-to-Image Retrieval

| Model | Dataset | Top1 | Top5 | Top10 |
|-------|---------|------|------|-------|
| Taiyi-CLIP-RoBERTa-102M-ViT-L-Chinese | Flickr30k-CNA-test | 58.32% | 82.96% | 89.40% |
| Taiyi-CLIP-RoBERTa-102M-ViT-L-Chinese | COCO-CN-test | 55.27% | 81.10% | 90.78% |
| Taiyi-CLIP-RoBERTa-102M-ViT-L-Chinese | wukong50k | 64.95% | 91.77% | 96.28% |
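The retrieval figures above are Recall@K values for text-to-image retrieval. Below is a schematic of how such a metric is commonly computed from features produced exactly as in the Basic Usage example (pooler output for the captions, `encode_image` for the gallery images, both L2-normalized). The helper and its assumption of one paired image per caption are illustrative, not the released evaluation code.

```python
import torch

def recall_at_k(text_feats: torch.Tensor, image_feats: torch.Tensor, ks=(1, 5, 10)):
    """Text-to-image Recall@K.

    text_feats:  (N, D) L2-normalized caption features.
    image_feats: (N, D) L2-normalized features of the paired images, in the same order.
    """
    sims = text_feats @ image_feats.t()                   # (N, N) cosine similarities
    ranks = sims.argsort(dim=1, descending=True)          # gallery indices sorted per caption
    target = torch.arange(len(text_feats)).unsqueeze(1)   # ground-truth image index per caption
    hit_rank = (ranks == target).float().argmax(dim=1)    # position of the paired image
    return {k: (hit_rank < k).float().mean().item() for k in ks}
```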
## License
The model is licensed under the Apache 2.0 license.
## Citation
If you use this resource in your work, please cite our paper:
```
@article{fengshenbang,
  author  = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
  title   = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
  journal = {CoRR},
  volume  = {abs/2209.02970},
  year    = {2022}
}
```
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```
@misc{Fengshenbang-LM,
  title={Fengshenbang-LM},
  author={IDEA-CCNL},
  year={2021},
  howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```