
ms-marco-TinyBERT-L-2-v2

Developed by cross-encoder
A lightweight cross-encoder trained on the MS MARCO passage ranking task for query-passage relevance scoring in information retrieval.
Downloads: 247.59k
Released: 3/2/2022

Model Overview

This model is designed for information retrieval: it scores the relevance between a query and a passage, making it well suited to reranking search engine results. Built on the compact TinyBERT architecture, it keeps ranking quality high while offering very fast inference.

Model Features

Efficient and Lightweight
Built on the TinyBERT architecture, the model is compact and fast, processing roughly 9,000 documents per second on a V100 GPU
Precise Ranking
Strong results on the TREC 2019 Deep Learning track and MS MARCO, reaching an NDCG@10 of 69.84 on TREC DL 2019
Plug-and-Play
Compatible with the HuggingFace Transformers and SentenceTransformers libraries, making it easy to integrate

Model Capabilities

Query-passage relevance scoring
Search result reranking
Information retrieval
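The same relevance scoring can be done with plain HuggingFace Transformers, since the model is published as a sequence-classification head over query/passage pairs; this is a sketch with placeholder inputs:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "cross-encoder/ms-marco-TinyBERT-L-2-v2"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)
model.eval()

queries = ["How many people live in Berlin?"] * 2
passages = [
    "Berlin had a population of 3,520,031 registered inhabitants.",
    "Berlin is well known for its museums.",
]

# Tokenize query/passage pairs together; the model scores each pair jointly
features = tokenizer(queries, passages, padding=True, truncation=True,
                     return_tensors="pt")
with torch.no_grad():
    scores = model(**features).logits.squeeze(-1)  # one logit per pair
print(scores)
```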

Use Cases

Search Engine Optimization
Search Result Reranking
Fine-grained reranking of initial candidates returned by retrieval systems such as Elasticsearch
Improves the quality of relevance ranking in search results
Question Answering Systems
Answer Passage Filtering
Selecting the most relevant results from candidate answer passages
Enhances the accuracy of question-answering systems
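The reranking workflow in both use cases follows the same shape: a first-stage retriever supplies candidate passages, the cross-encoder scores each (query, passage) pair, and the candidates are re-sorted by score. A small sketch of that step, where `toy_score` (simple word overlap) is a hypothetical stand-in for `model.predict`:

```python
from typing import Callable, Sequence

def rerank(query: str,
           passages: Sequence[str],
           score_fn: Callable[[list], Sequence[float]],
           top_k: int = 3) -> list:
    """Re-sort first-stage retrieval candidates by relevance score."""
    scores = score_fn([(query, p) for p in passages])
    ranked = sorted(zip(passages, scores), key=lambda x: x[1], reverse=True)
    return ranked[:top_k]

# Toy scorer: counts shared words between query and passage.
# In practice, pass CrossEncoder.predict here instead.
def toy_score(pairs):
    return [len(set(q.lower().split()) & set(p.lower().split()))
            for q, p in pairs]

results = rerank("population of berlin",
                 ["berlin museums are popular",
                  "the population of berlin grew",
                  "paris has a large population"],
                 toy_score, top_k=2)
print(results)  # best-matching passage first
```

Swapping `toy_score` for the real cross-encoder changes only the scoring function; the sort-and-truncate step is identical.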