
Marqo-FashionSigLIP

Developed by Marqo
Marqo-FashionSigLIP is a multimodal embedding model optimized for fashion product search, with a 57% improvement in MRR and recall rate compared to FashionCLIP.
Downloads: 493.25k
Release date: 8/9/2024

Model Overview

This model is trained using generalized contrastive learning and supports the retrieval of fashion products based on various features such as text descriptions, categories, styles, colors, and materials, providing highly relevant search results.
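
As a rough usage sketch, the model can encode images and text into the same embedding space. This assumes the checkpoint is published on the Hugging Face Hub as Marqo/marqo-fashionSigLIP and is loadable through the open_clip library; the identifier and file name below are assumptions, so verify them against the official model card.

```python
import torch
import open_clip
from PIL import Image

# Load the model, image preprocessor, and tokenizer from the Hugging Face Hub.
# The hub identifier is an assumption; check the official model card.
model, _, preprocess = open_clip.create_model_and_transforms("hf-hub:Marqo/marqo-fashionSigLIP")
tokenizer = open_clip.get_tokenizer("hf-hub:Marqo/marqo-fashionSigLIP")
model.eval()

image = preprocess(Image.open("red_dress.jpg").convert("RGB")).unsqueeze(0)  # 1 x 3 x H x W
texts = tokenizer(["a red evening dress", "blue denim jeans"])               # tokenized text batch

with torch.no_grad():
    image_emb = model.encode_image(image)
    text_emb = model.encode_text(texts)
    # L2-normalize so that dot products equal cosine similarities.
    image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)

print(image_emb @ text_emb.T)  # similarity of the image to each text description
```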

Model Features

Generalized contrastive learning
Trained using generalized contrastive learning (GCL), supporting multimodal retrieval and ranking to improve search relevance.
Multimodal embedding
Encodes both image and text inputs into a shared embedding space, producing a unified representation for the two modalities.
Optimized for the fashion domain
Specifically optimized for fashion products and performs well on multiple fashion datasets.

Model Capabilities

Zero-shot image classification (see the sketch after this list)
Multimodal retrieval
Fashion product search
Image-text matching
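
The zero-shot classification capability can be sketched as a similarity comparison between an image embedding and a set of category-prompt embeddings. This is a minimal example under the same assumptions as above (open_clip loading, the hypothetical hub identifier Marqo/marqo-fashionSigLIP); the category prompts and file name are illustrative.

```python
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms("hf-hub:Marqo/marqo-fashionSigLIP")
tokenizer = open_clip.get_tokenizer("hf-hub:Marqo/marqo-fashionSigLIP")
model.eval()

categories = ["a hat", "a t-shirt", "a pair of shoes", "a handbag"]  # illustrative labels
image = preprocess(Image.open("product.jpg").convert("RGB")).unsqueeze(0)
text = tokenizer(categories)

with torch.no_grad():
    img = model.encode_image(image)
    txt = model.encode_text(text)
    img = img / img.norm(dim=-1, keepdim=True)
    txt = txt / txt.norm(dim=-1, keepdim=True)
    # Scaled cosine similarities -> softmax gives per-category probabilities.
    probs = (100.0 * img @ txt.T).softmax(dim=-1)

for name, p in zip(categories, probs[0].tolist()):
    print(f"{name}: {p:.3f}")
```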

Use Cases

E-commerce
Fashion product search
Search for relevant fashion products based on text descriptions or images (see the retrieval sketch after this section).
Average recall improves by 57% across multiple fashion datasets compared to FashionCLIP.
Product classification
Automatically classify fashion products.
Average precision on the category-to-product task reaches 0.737.
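
The search use case above can be sketched as text-to-image retrieval: catalog images are embedded once, and a free-text query is ranked against them by cosine similarity. The catalog file names and the query are made up, and the hub identifier and open_clip loading are the same assumptions as in the earlier snippets.

```python
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms("hf-hub:Marqo/marqo-fashionSigLIP")
tokenizer = open_clip.get_tokenizer("hf-hub:Marqo/marqo-fashionSigLIP")
model.eval()

# Hypothetical product catalog: embed each image once and keep the matrix around.
catalog = ["dress_001.jpg", "jeans_002.jpg", "sneakers_003.jpg"]
with torch.no_grad():
    imgs = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in catalog])
    img_emb = model.encode_image(imgs)
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)

    # Embed the free-text query and rank products by cosine similarity.
    query = tokenizer(["black leather sneakers"])
    q_emb = model.encode_text(query)
    q_emb = q_emb / q_emb.norm(dim=-1, keepdim=True)

scores = (q_emb @ img_emb.T).squeeze(0)
for idx in scores.argsort(descending=True).tolist():
    print(catalog[idx], f"{scores[idx]:.3f}")
```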