🚀 Model card for vit_huge_patch14_224.mae
A Vision Transformer (ViT) image feature model, pretrained on ImageNet-1k with the self-supervised Masked Autoencoder (MAE) method. It can be used for image feature extraction and related tasks.
🚀 Quick start
This model is an image feature model built on the Vision Transformer (ViT) architecture and pretrained on the ImageNet-1k dataset with the self-supervised Masked Autoencoder (MAE) method. Usage examples follow below.
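To make the MAE pretraining idea concrete: MAE hides a large random fraction of image patches (typically 75%) and trains the encoder only on the visible ones, with a lightweight decoder reconstructing the missing pixels. The snippet below is a minimal, standalone sketch of the random patch masking step only (it is illustrative NumPy code, not the facebookresearch/mae or timm implementation):

```python
import numpy as np

def mae_random_mask(num_patches, mask_ratio=0.75, seed=0):
    """Pick which patches stay visible, MAE-style: shuffle indices, keep the first 25%."""
    rng = np.random.default_rng(seed)
    num_keep = int(num_patches * (1 - mask_ratio))
    shuffled = rng.permutation(num_patches)
    keep_indices = np.sort(shuffled[:num_keep])  # visible patches fed to the encoder
    mask = np.ones(num_patches, dtype=bool)      # True = masked, reconstructed by the decoder
    mask[keep_indices] = False
    return keep_indices, mask

# This model sees 224x224 images cut into 14x14 patches -> (224 // 14) ** 2 = 256 patches
keep, mask = mae_random_mask(256)
print(len(keep), int(mask.sum()))  # 64 visible patches, 192 masked
```

Because only ~25% of tokens pass through the large encoder during pretraining, MAE scales well to models of this size.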
💻 Usage examples
Basic usage
Image classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('vit_huge_patch14_224.mae', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
Image embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'vit_huge_patch14_224.mae',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 257, 1280) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
📚 Documentation
Model details
| Attribute | Details |
|---|---|
| Model type | Image classification / feature backbone |
| Model stats | Params (M): 630.8<br>GMACs: 167.4<br>Activations (M): 139.4<br>Image size: 224 x 224 |
| Papers | Masked Autoencoders Are Scalable Vision Learners: https://arxiv.org/abs/2111.06377<br>An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2 |
| Pretrain dataset | ImageNet-1k |
| Original | https://github.com/facebookresearch/mae |
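The numbers in the table follow from the model name: a 224 x 224 input split into 14-pixel patches fixes the encoder's token sequence length (assuming the usual single class token, as in timm's ViT implementations). A quick arithmetic check:

```python
# Token count for vit_huge_patch14_224: 224x224 input, 14x14 patches
image_size, patch_size = 224, 14
patches_per_side = image_size // patch_size  # 16
num_patches = patches_per_side ** 2          # 256 patch tokens
seq_len = num_patches + 1                    # plus one class token -> 257
print(num_patches, seq_len)                  # 256 257
```

This matches the (1, 257, 1280) unpooled feature shape in the embeddings example above (1280 is the ViT-Huge embedding dimension).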
Model comparison
You can explore this model's dataset and runtime metrics in the timm model results: https://github.com/huggingface/pytorch-image-models/tree/main/results
Citation
```bibtex
@Article{MaskedAutoencoders2021,
  author  = {Kaiming He and Xinlei Chen and Saining Xie and Yanghao Li and Piotr Doll{\'a}r and Ross Girshick},
  journal = {arXiv:2111.06377},
  title   = {Masked Autoencoders Are Scalable Vision Learners},
  year    = {2021},
}
@article{dosovitskiy2020vit,
  title   = {An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
  author  = {Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
  journal = {ICLR},
  year    = {2021}
}
@misc{rw2019timm,
  author       = {Ross Wightman},
  title        = {PyTorch Image Models},
  year         = {2019},
  publisher    = {GitHub},
  journal      = {GitHub repository},
  doi          = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
📄 License
This model is released under the CC-BY-NC-4.0 license.