
mmarco-mMiniLMv2-L12-H384-v1

Developed by cross-encoder
A multilingual text ranking model trained on the MMARCO dataset, supporting information retrieval tasks in 14 languages
Downloads 42.56k
Release Time: 6/1/2022

Model Overview

This model is a multilingual cross-encoder designed for information retrieval. Given a query and a candidate passage, it outputs a relevance score; scoring every candidate passage and sorting by score makes it well suited to re-ranking in multilingual search engines.
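A minimal sketch of this query-passage scoring pattern. The `rerank` helper and the word-overlap `toy_score` function are illustrative assumptions; with the `sentence-transformers` package installed, `score_pairs` would be the model's `CrossEncoder.predict` method, as shown in the comment.

```python
# Rank candidate passages for a query with a cross-encoder-style score function.
# `score_pairs` stands in for CrossEncoder.predict; with sentence-transformers
# installed it would be:
#   from sentence_transformers import CrossEncoder
#   model = CrossEncoder("cross-encoder/mmarco-mMiniLMv2-L12-H384-v1")
#   score_pairs = model.predict
def rerank(query, passages, score_pairs):
    scores = score_pairs([(query, p) for p in passages])
    # Highest-scoring passage first.
    return sorted(zip(passages, scores), key=lambda x: x[1], reverse=True)

# Toy scorer: counts shared words between query and passage (illustrative only,
# not the model's actual scoring).
def toy_score(pairs):
    return [len(set(q.lower().split()) & set(p.lower().split())) for q, p in pairs]

ranked = rerank("capital of France",
                ["Berlin is in Germany.", "Paris is the capital of France."],
                toy_score)
print(ranked[0][0])  # most relevant passage first
```

The same `rerank` shape works unchanged whether the scorer is the toy function or the real model, since `CrossEncoder.predict` also takes a list of (query, passage) pairs and returns one score per pair.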

Model Features

Multilingual support
Handles text ranking in the 14 languages covered by the MMARCO dataset, on which the model was trained
Efficient architecture
Lightweight architecture based on MiniLMv2, with 12 Transformer layers and a 384-dimensional hidden size
Information retrieval optimization
Specifically designed for query-passage relevance scoring tasks in search engines

Model Capabilities

Multilingual text ranking
Query-passage relevance scoring
Information retrieval result re-ranking

Use Cases

Search engines
Multilingual search result re-ranking
Re-ranking results returned by first-stage retrieval systems such as Elasticsearch by relevance
Improves the relevance and accuracy of search results
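A sketch of this two-stage retrieve-then-rerank pattern. The hit structure (dicts with a `"text"` field) and the `predict` callable are assumptions, standing in for first-stage retriever output and a `CrossEncoder.predict`-style scorer respectively.

```python
# Attach a relevance score to each first-stage hit and re-sort.
# The hit dicts and `predict` are illustrative assumptions, not a real
# Elasticsearch response format.
def rerank_hits(query, hits, predict):
    scores = predict([(query, h["text"]) for h in hits])
    return sorted(
        ({**h, "rerank_score": float(s)} for h, s in zip(hits, scores)),
        key=lambda h: h["rerank_score"],
        reverse=True,
    )

def toy_predict(pairs):  # word-overlap stand-in for the real model's scores
    return [len(set(q.lower().split()) & set(t.lower().split())) for q, t in pairs]

hits = [{"id": 1, "text": "Berlin travel guide"},
        {"id": 2, "text": "Paris, capital of France"}]
top = rerank_hits("capital of France", hits, toy_predict)
```

Typically only the top-k hits from the retriever (e.g. k = 100) are passed to the cross-encoder, since it must run one forward pass per query-passage pair.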
Question answering systems
Candidate answer ranking
Ranking multiple candidate answers generated by a question answering system by relevance
Helps the system select the most relevant answer
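For answer selection, the same scoring reduces to taking the top-scoring candidate. The `best_answer` helper and `toy_predict` scorer below are illustrative assumptions; in practice `predict` would be the model's `CrossEncoder.predict`.

```python
# Select the most relevant candidate answer using cross-encoder-style scores.
# `predict` stands in for CrossEncoder.predict (assumption).
def best_answer(question, candidates, predict):
    scores = predict([(question, c) for c in candidates])
    return max(zip(candidates, scores), key=lambda x: x[1])[0]

def toy_predict(pairs):  # word-overlap stand-in, not the real model
    return [len(set(q.lower().split()) & set(a.lower().split())) for q, a in pairs]

answer = best_answer("When was Python released?",
                     ["Python was released in 1991.", "Java appeared in 1995."],
                     toy_predict)
```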