
Arabic CLIP ViT Base Patch32

Developed by LinaAlhuri
Arabic CLIP is an adaptation of the Contrastive Language-Image Pre-training (CLIP) model for Arabic, capable of learning visual concepts from images and associating them with Arabic text descriptions.
Downloads 33
Release Time: 3/31/2023

Model Overview

This model is an Arabic adaptation of the OpenAI CLIP architecture, focused on improving the understanding and interpretation of visual information in Arabic-language contexts.

Model Features

Arabic Adaptation
Specifically optimized for Arabic, addressing the scarcity of Arabic image-text data and the quality issues of machine-translated captions
Multi-dataset Training
Trained on over 2 million Arabic image-text pairs, combining authentic Arabic datasets with translated ones
Zero-shot Learning Capability
Supports zero-shot transfer and performs strongly on multiple Arabic benchmarks

Model Capabilities

Image Understanding
Arabic Text-Image Association
Zero-shot Image Classification
Image Retrieval
Cross-modal Search
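All of the capabilities above rest on CLIP's shared embedding space: images and Arabic texts are encoded into the same vector space and compared by cosine similarity. A minimal sketch of that scoring step, using random vectors as stand-ins for the real image and text encoder outputs (the encoders themselves are not reproduced here):

```python
import numpy as np

def normalize(x):
    # L2-normalize rows so dot products equal cosine similarity
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

rng = np.random.default_rng(0)
# Stand-ins for encoder outputs: 3 image embeddings and 2 Arabic
# text-query embeddings, both projected into the same 512-dim space
image_emb = normalize(rng.standard_normal((3, 512)))
text_emb = normalize(rng.standard_normal((2, 512)))

# Similarity matrix (queries x images); each row ranks all images
# for one Arabic text query
sims = text_emb @ image_emb.T
best_image_per_query = sims.argmax(axis=1)
print(sims.shape)  # (2, 3)
```

For retrieval, the images are sorted by each row of `sims`; for classification, the roles flip and each image is scored against a set of Arabic class-name prompts.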

Use Cases

Image Retrieval
Arabic Concept Image Retrieval
Retrieve relevant images based on Arabic descriptions
MRR@10 reaches 0.244
Zero-shot Learning
Arabic Image Classification
Classify images directly without training
Top-1 accuracy of 17.58%