# Cross-lingual Adaptation
## Arabic Clip Vit Base Patch32 (LinaAlhuri)
Arabic CLIP is an adapted version of the Contrastive Language-Image Pre-training (CLIP) model for Arabic, capable of learning concepts from images and associating them with Arabic text descriptions.
Tags: Text-to-Image, Arabic
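As an illustration of how an Arabic CLIP checkpoint is typically exercised, below is a minimal zero-shot image-caption scoring sketch using the `transformers` CLIP classes. The repo ID `LinaAlhuri/arabic-clip-vit-base-patch32`, the image path, and the example captions are assumptions; if the released weights do not follow the standard CLIP architecture, `AutoModel`/`AutoProcessor` would be needed instead.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Hypothetical repo ID; the actual Hugging Face checkpoint name may differ.
MODEL_ID = "LinaAlhuri/arabic-clip-vit-base-patch32"

model = CLIPModel.from_pretrained(MODEL_ID)
processor = CLIPProcessor.from_pretrained(MODEL_ID)

image = Image.open("example.jpg")  # placeholder image path
captions = ["قطة تجلس على الأريكة", "كلب يركض في الحديقة"]  # example Arabic captions

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Higher image-text logits indicate a better match between the image and a caption.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(captions, probs[0].tolist())))
```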
## Mdpr Tied Pft Msmarco Ft All (castorini)
This is a dense retrieval model based on the castorini/mdpr-tied-pft-msmarco checkpoint and further fine-tuned on all Mr. TyDi training data.
Tags: Large Language Model, Transformers
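A sketch of how such a dense retriever is commonly used: encode queries and passages with the same (tied) encoder, take the [CLS] embedding, and rank passages by dot product. The repo ID `castorini/mdpr-tied-pft-msmarco-ft-all` is inferred from the entry above, and CLS pooling follows the usual DPR convention; both are assumptions here.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# "Tied" means one shared encoder for both queries and passages.
MODEL_ID = "castorini/mdpr-tied-pft-msmarco-ft-all"  # assumed repo ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)
model.eval()

def encode(texts):
    # CLS-token pooling (assumed, per the standard DPR setup).
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state
    return hidden[:, 0, :]

query_emb = encode(["ما هي عاصمة المغرب؟"])          # example query (Arabic)
passage_embs = encode([
    "الرباط هي عاصمة المملكة المغربية.",              # relevant passage
    "القاهرة هي أكبر مدينة في أفريقيا.",              # distractor passage
])

# Dot-product similarity ranks passages for the query.
scores = query_emb @ passage_embs.T
print(scores)
```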
## Afro Xlmr Mini (Davlan, MIT license)
AfroXLMR-mini is created by adapting the XLM-R-miniLM model through masked language model (MLM) training on 17 African languages, covering the major African language families, plus three high-resource languages (Arabic, French, and English).
Tags: Large Language Model, Transformers
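Since AfroXLMR-mini is trained with a masked language modeling objective, a quick way to try it is the fill-mask pipeline. The repo ID `Davlan/afro-xlmr-mini` is assumed from the entry above, and the Swahili prompt is only an illustrative placeholder; XLM-R-style models use `<mask>` as the mask token.

```python
from transformers import pipeline

# Assumed repo ID for the AfroXLMR-mini checkpoint described above.
fill_mask = pipeline("fill-mask", model="Davlan/afro-xlmr-mini")

# Example Swahili prompt: "Nairobi is the capital of <mask>."
for pred in fill_mask("Nairobi ni mji mkuu wa <mask>."):
    print(pred["token_str"], round(pred["score"], 3))
```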