Taiyi-CLIP-Roberta-102M-Chinese
The first open-source Chinese CLIP model, pre-trained on 123 million image-text pairs with a RoBERTa-base text encoder.
Quick Start
Taiyi-CLIP-Roberta-102M-Chinese is an open-source Chinese CLIP model. It follows the experimental setup of CLIP to obtain powerful visual-language representations, and can be used for tasks such as zero-shot image classification and feature extraction.
Features
- Open-source Chinese CLIP: the first open-source Chinese CLIP model in the Hugging Face community.
- Powerful representations: follows the CLIP experimental setup to obtain strong visual-language representations.
- Stable training: freezes the vision encoder and fine-tunes only the language encoder, which speeds up and stabilizes pre-training (see the sketch below).
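A minimal sketch of this frozen-vision-encoder setup, using the same Hugging Face checkpoints as the usage example below. The optimizer choice and learning rate are illustrative assumptions, not values taken from the original training configuration:

```python
import torch
from transformers import BertForSequenceClassification, CLIPModel

# Trainable Chinese text encoder and frozen ViT-B/32 vision encoder.
text_encoder = BertForSequenceClassification.from_pretrained(
    "IDEA-CCNL/Taiyi-CLIP-Roberta-102M-Chinese")
clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")

# Freeze the vision side entirely; gradients flow only through the text encoder.
for param in clip_model.parameters():
    param.requires_grad = False
clip_model.eval()

# Only the text encoder's parameters are optimized (learning rate is illustrative).
optimizer = torch.optim.AdamW(text_encoder.parameters(), lr=1e-5)
```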
Installation
The original document does not list dedicated installation steps. The usage example below only relies on standard packages such as `torch`, `transformers`, `Pillow`, `requests`, and `numpy`.
Usage Examples
Basic Usage
from PIL import Image
import requests
import torch
import numpy as np
from transformers import BertForSequenceClassification, BertTokenizer
from transformers import CLIPProcessor, CLIPModel

# Candidate Chinese captions: "a cat", "a dog", "two cats", "two tigers", "a tiger"
query_texts = ["一只猫", "一只狗", "两只猫", "两只老虎", "一只老虎"]

# Taiyi text encoder (RoBERTa-based) and its tokenizer
text_tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Taiyi-CLIP-Roberta-102M-Chinese")
text_encoder = BertForSequenceClassification.from_pretrained("IDEA-CCNL/Taiyi-CLIP-Roberta-102M-Chinese").eval()
text = text_tokenizer(query_texts, return_tensors='pt', padding=True)['input_ids']

# Frozen ViT-B/32 vision encoder from OpenAI CLIP
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
image = processor(images=Image.open(requests.get(url, stream=True).raw), return_tensors="pt")

with torch.no_grad():
    image_features = clip_model.get_image_features(**image)
    text_features = text_encoder(text).logits
    # Normalize the features, then compute scaled cosine-similarity logits
    image_features = image_features / image_features.norm(dim=1, keepdim=True)
    text_features = text_features / text_features.norm(dim=1, keepdim=True)
    logit_scale = clip_model.logit_scale.exp()
    logits_per_image = logit_scale * image_features @ text_features.t()
    logits_per_text = logits_per_image.t()
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()
    print(np.around(probs, 3))
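Advanced Usage
The same pair of encoders can also be used for feature extraction and text-to-image retrieval. The sketch below ranks a small pool of candidate images against a Chinese query; the query text and the image URLs are illustrative placeholders rather than part of the original example:

```python
import torch
import requests
from PIL import Image
from transformers import BertForSequenceClassification, BertTokenizer, CLIPProcessor, CLIPModel

text_tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Taiyi-CLIP-Roberta-102M-Chinese")
text_encoder = BertForSequenceClassification.from_pretrained(
    "IDEA-CCNL/Taiyi-CLIP-Roberta-102M-Chinese").eval()
clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Query in Chinese ("two cats lying on a sofa") and example COCO image URLs (placeholders).
query = ["两只猫躺在沙发上"]
urls = [
    "http://images.cocodataset.org/val2017/000000039769.jpg",
    "http://images.cocodataset.org/val2017/000000037777.jpg",
]
images = [Image.open(requests.get(u, stream=True).raw) for u in urls]

with torch.no_grad():
    text_ids = text_tokenizer(query, return_tensors="pt", padding=True)["input_ids"]
    text_features = text_encoder(text_ids).logits
    image_inputs = processor(images=images, return_tensors="pt")
    image_features = clip_model.get_image_features(**image_inputs)

    # Normalize, then rank candidate images by cosine similarity to the query.
    text_features = text_features / text_features.norm(dim=1, keepdim=True)
    image_features = image_features / image_features.norm(dim=1, keepdim=True)
    similarity = (text_features @ image_features.t()).squeeze(0)

print(similarity.argsort(descending=True))  # candidate indices, best match first
```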
Documentation
Model Taxonomy
| Property | Details |
|-----------|------------|
| Demand | Special |
| Task | Multimodal |
| Series | Taiyi |
| Model | CLIP (Roberta) |
| Parameter | 102M |
| Extra | Chinese |
Model Information
We follow the experimental setup of CLIP to obtain powerful visual-language representations. To build a CLIP for Chinese, we employ [chinese-roberta-wwm](https://huggingface.co/hfl/chinese-roberta-wwm-ext) as the language encoder and use the ViT-B/32 vision encoder from CLIP. We freeze the vision encoder and tune only the language encoder to speed up and stabilize pre-training. We use the [Noah-Wukong](https://wukong-dataset.github.io/wukong-dataset/) dataset (100M image-text pairs) and the Zero dataset (23M) for pre-training. We train for 24 epochs, which takes 7 days on 16 A100 GPUs. To the best of our knowledge, our TaiyiCLIP is currently the only open-sourced Chinese CLIP in the Hugging Face community.
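As an illustration of this setup, the sketch below shows one CLIP-style symmetric contrastive (InfoNCE) training step on a batch of paired image/text embeddings. The batch size, embedding dimension, and `logit_scale` value are placeholders rather than the actual training configuration:

```python
import torch
import torch.nn.functional as F

def contrastive_step(image_features, text_features, logit_scale=100.0):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings."""
    # L2-normalize so the dot products are cosine similarities.
    image_features = F.normalize(image_features, dim=-1)
    text_features = F.normalize(text_features, dim=-1)

    # Similarity matrix: entry (i, j) compares image i with text j.
    logits_per_image = logit_scale * image_features @ text_features.t()
    logits_per_text = logits_per_image.t()

    # The matching pairs lie on the diagonal.
    labels = torch.arange(image_features.size(0), device=image_features.device)
    loss_image = F.cross_entropy(logits_per_image, labels)
    loss_text = F.cross_entropy(logits_per_text, labels)
    return (loss_image + loss_text) / 2

# Placeholder embeddings: a batch of 8 image/text pairs with 512-d projections.
loss = contrastive_step(torch.randn(8, 512), torch.randn(8, 512))
print(loss.item())
```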
Downstream Performance
Zero-Shot Classification
| Model | Dataset | Top1 | Top5 |
|-------|---------|------|------|
| Taiyi-CLIP-Roberta-102M-Chinese | ImageNet1k-CN | 42.85% | 71.48% |
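For reference, zero-shot classification builds one text embedding per class from a Chinese prompt and ranks classes by cosine similarity to the image embedding. The class names and prompt template below are illustrative assumptions, not the exact prompts behind the reported ImageNet1k-CN numbers:

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

# Illustrative Chinese class names ("cat", "dog", "airplane", "car") and prompt template.
class_names = ["猫", "狗", "飞机", "汽车"]
prompts = [f"一张{name}的照片" for name in class_names]  # "a photo of a {name}"

tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Taiyi-CLIP-Roberta-102M-Chinese")
text_encoder = BertForSequenceClassification.from_pretrained(
    "IDEA-CCNL/Taiyi-CLIP-Roberta-102M-Chinese").eval()

with torch.no_grad():
    tokens = tokenizer(prompts, return_tensors="pt", padding=True)["input_ids"]
    class_embeddings = text_encoder(tokens).logits
    class_embeddings = class_embeddings / class_embeddings.norm(dim=1, keepdim=True)

# `class_embeddings` can replace `text_features` in the basic usage example above:
# the class with the highest cosine similarity to an image is the top-1 prediction,
# and top-5 accuracy counts an image as correct if its true class is among the five
# highest-scoring prompts.
```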
Zero-Shot Text-to-Image Retrieval
| Model | Dataset | Top1 | Top5 | Top10 |
|-------|---------|------|------|-------|
| Taiyi-CLIP-Roberta-102M-Chinese | Flickr30k-CNA-test | 46.32% | 74.58% | 83.44% |
| Taiyi-CLIP-Roberta-102M-Chinese | COCO-CN-test | 47.10% | 78.53% | 87.84% |
| Taiyi-CLIP-Roberta-102M-Chinese | wukong50k | 49.18% | 81.94% | 90.27% |
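Text-to-image retrieval Recall@K can be computed from the normalized embeddings in a similar way. The sketch below assumes pre-computed embedding matrices with a one-to-one text/image pairing, which may differ from the exact evaluation protocol used for these numbers:

```python
import torch
import torch.nn.functional as F

def recall_at_k(text_emb, image_emb, ks=(1, 5, 10)):
    """Text-to-image retrieval recall, assuming text i matches image i."""
    sims = F.normalize(text_emb, dim=-1) @ F.normalize(image_emb, dim=-1).t()
    targets = torch.arange(sims.size(0)).unsqueeze(-1)   # (num_texts, 1)
    ranks = sims.argsort(dim=-1, descending=True)        # images sorted per query
    return {f"R@{k}": (ranks[:, :k] == targets).any(dim=-1).float().mean().item()
            for k in ks}

# Placeholder embeddings: 50 query texts against 50 candidate images, 512-d each.
print(recall_at_k(torch.randn(50, 512), torch.randn(50, 512)))
```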
License
The model is licensed under the Apache-2.0 license.
Citation
If you are using this resource for your work, please cite our paper:
@article{fengshenbang,
author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}