TeCoA4 CLIP
Developed by chs20
TeCoA is a vision-language model initialized from OpenAI CLIP and enhanced with supervised adversarial fine-tuning for improved robustness.
Downloads: 51
Release Time: 2/23/2024
Model Overview
This model is based on the CLIP ViT-L/14 architecture, adversarially fine-tuned on ImageNet under the L∞ norm with radius 4/255 to enhance robustness against adversarial attacks. It is primarily used for zero-shot image classification tasks.
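The snippet below is a minimal sketch of zero-shot classification with this checkpoint via OpenCLIP. The hub id hf-hub:chs20/tecoa4-clip, the example image path, and the candidate labels are assumptions for illustration, not details stated on this page.

```python
# Zero-shot classification sketch (assumes the checkpoint is published on the
# Hugging Face Hub as chs20/tecoa4-clip and is loadable through OpenCLIP).
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms("hf-hub:chs20/tecoa4-clip")
tokenizer = open_clip.get_tokenizer("hf-hub:chs20/tecoa4-clip")
model.eval()

# Hypothetical input image and candidate class prompts.
image = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)
labels = ["a photo of a dog", "a photo of a cat", "a photo of a car"]
text = tokenizer(labels)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    # Scaled cosine similarity turned into a distribution over the labels.
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(dict(zip(labels, probs[0].tolist())))
```

Because the robust fine-tuning keeps the CLIP interface, the same prompt-based classification workflow used with standard CLIP applies unchanged.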
Model Features
Adversarial Robustness
Significantly improves resistance to adversarial attacks through L∞-norm adversarial training with radius 4/255
Zero-shot Capability
Retains CLIP's zero-shot classification ability and can be applied to novel categories without task-specific fine-tuning
Supervised Fine-tuning
Supervised adversarial fine-tuning on the ImageNet dataset balances clean accuracy and robustness (a PGD sketch follows this list)
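As referenced above, the sketch below illustrates one supervised adversarial fine-tuning step under an L∞ budget of 4/255, in the TeCoA style of classifying adversarial images against frozen class-text embeddings. The PGD step count, step size, and function names are illustrative assumptions, not the authors' training code.

```python
# Illustrative L-inf PGD adversarial fine-tuning step (eps = 4/255).
# Assumes images lie in [0, 1] and that any normalization is folded into
# model.encode_image; text_features are frozen, L2-normalized class embeddings.
import torch
import torch.nn.functional as F

def pgd_attack(encode_image, images, labels, text_features,
               eps=4 / 255, step_size=1 / 255, steps=10):
    """Craft L-inf bounded adversarial examples against the zero-shot classifier."""
    delta = torch.empty_like(images).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        feats = F.normalize(encode_image((images + delta).clamp(0, 1)), dim=-1)
        logits = 100.0 * feats @ text_features.T  # cosine-similarity logits
        loss = F.cross_entropy(logits, labels)
        (grad,) = torch.autograd.grad(loss, delta)
        delta = (delta + step_size * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return (images + delta).clamp(0, 1).detach()

def adversarial_finetune_step(model, optimizer, images, labels, text_features):
    """One supervised adversarial training step on the image encoder."""
    adv_images = pgd_attack(model.encode_image, images, labels, text_features)
    feats = F.normalize(model.encode_image(adv_images), dim=-1)
    logits = 100.0 * feats @ text_features.T
    loss = F.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training only on such perturbed images while keeping the text tower fixed is what trades a small amount of clean accuracy for robustness within the 4/255 threat model.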
Model Capabilities
Zero-shot image classification
Adversarially robust image recognition
Cross-modal understanding (image-text)
Use Cases
Computer Vision
Safety-critical System Image Recognition
Reliable image classification in adversarial environments, suitable for autonomous driving and security systems
Maintains higher accuracy under adversarial attacks compared to standard CLIP
Open-domain Image Understanding
Leverages zero-shot capability to recognize unseen object categories