M-BERT Distil 40
A model based on distilbert-base-multilingual-cased, fine-tuned for 40 languages so that its text embedding space matches the embedding space of the CLIP text encoder.
Text-to-Image
Transformers · Supports Multiple Languages · Multilingual Text Embedding · CLIP-Compatible · 40 Language Support

Downloads 46
Release Time: 3/2/2022
Model Overview
This is a multilingual text encoder, fine-tuned so that its embeddings align with the embedding space of the CLIP text encoder, with support for 40 languages.
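Below is a minimal sketch of producing multilingual sentence embeddings with this encoder through the transformers library. The Hub ID M-CLIP/M-BERT-Distil-40 and the mean-pooling step are assumptions for illustration; the official Multilingual-CLIP wrapper, which also applies the learned projection into the CLIP text space, may be needed for exact CLIP compatibility.

```python
# Sketch: multilingual sentence embeddings from the fine-tuned encoder.
# The checkpoint name and the pooling strategy are assumptions, not the
# model's official API.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "M-CLIP/M-BERT-Distil-40"  # assumed Hub ID for this model

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)
model.eval()

sentences = [
    "A dog playing in the snow",        # English
    "Un chien qui joue dans la neige",  # French
    "Ein Hund spielt im Schnee",        # German
]

with torch.no_grad():
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    hidden = model(**batch).last_hidden_state             # (batch, seq_len, dim)
    mask = batch["attention_mask"].unsqueeze(-1).float()  # mask out padding tokens
    embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # mean pooling

print(embeddings.shape)  # one vector per input sentence
```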
Model Features
Multilingual Support
Supports text embedding for 40 languages, covering a wide range of linguistic diversity.
CLIP-Compatible
Fine-tuned to align with the embedding space of the CLIP text encoder, enabling seamless integration with the CLIP visual encoder.
Based on DistilBERT
Built upon the distilbert-base-multilingual-cased model, a distilled encoder that is smaller and faster than the full multilingual BERT.
Model Capabilities
Multilingual text embedding
Integration with CLIP visual encoder
Text processing for 40 languages
Use Cases
Multilingual Applications
Multilingual Image Captioning
Works with the CLIP visual encoder to generate image captions in multiple languages.
Performs well in languages such as French, German, Spanish, Russian, Swedish, and Greek.
Cross-Language Search
Embeds queries in any of the supported languages into the shared space so they can be matched against CLIP image embeddings, as sketched below.
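The following sketch illustrates the cross-language search flow under stated assumptions: a query embedding from the multilingual text encoder (the hypothetical query_embedding produced by the sketch above) is ranked against CLIP image features by cosine similarity. The openai/clip-vit-base-patch32 checkpoint is used only as an example image encoder; in practice the text embeddings must be projected into the space of the same CLIP variant this model was aligned to, so the dimensions match.

```python
# Sketch: rank images against a multilingual text query by cosine similarity.
# Illustrative only; the CLIP checkpoint choice and the commented-out helpers
# are assumptions, and text/image embedding dimensions must match.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def image_embeddings(paths):
    """Encode a list of image files with the CLIP visual encoder."""
    images = [Image.open(p) for p in paths]
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        return clip.get_image_features(**inputs)  # (num_images, dim)

def rank(query_embedding, image_features, top_k=3):
    """Return the top-k images by cosine similarity to a single query vector."""
    q = torch.nn.functional.normalize(query_embedding, dim=-1)
    im = torch.nn.functional.normalize(image_features, dim=-1)
    scores = im @ q.squeeze(0)  # (num_images,)
    return scores.topk(min(top_k, scores.numel()))

# Example usage (file paths and the query embedding are placeholders):
# features = image_embeddings(["dog.jpg", "cat.jpg", "beach.jpg"])
# print(rank(query_embedding, features))
```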