
Reranker Bert Tiny Gooaq Bce

Developed by cross-encoder-testing
This cross-encoder model is fine-tuned from bert-tiny to compute similarity scores for text pairs, making it suitable for tasks such as semantic textual similarity and semantic search.
Downloads 37.19k
Release Time: 2/26/2025

Model Overview

This model is built on the BERT-tiny architecture using the sentence-transformers library. It computes similarity scores between text pairs and can be applied to semantic textual similarity, semantic search, paraphrase mining, text classification, and clustering.
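As a minimal usage sketch, scores can be computed with the CrossEncoder class from sentence-transformers. The model identifier below is assumed from the card (developer cross-encoder-testing plus the model name); adjust it if the published repository id differs, and the example pairs are purely illustrative.

```python
# Minimal sketch: scoring text pairs with the sentence-transformers CrossEncoder API.
# The model id is assumed from the card; substitute your own if it differs.
from sentence_transformers import CrossEncoder

model = CrossEncoder("cross-encoder-testing/reranker-bert-tiny-gooaq-bce")

pairs = [
    ("how to bake bread at home",
     "Mix flour, water, yeast and salt, then let the dough rise before baking."),
    ("how to bake bread at home",
     "The Eiffel Tower is located in Paris, France."),
]

# predict() returns one relevance score per (query, passage) pair;
# higher scores indicate a closer semantic match.
scores = model.predict(pairs)
print(scores)
```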

Model Features

Efficient and Lightweight
Based on the BERT-tiny architecture, the model is compact and computationally efficient.
Versatile for Multiple Tasks
Applicable to various tasks such as semantic textual similarity, semantic search, paraphrase mining, and text classification.
High Performance
Performs well on multiple evaluation datasets, reaching a mean average precision (mAP) of 0.5677 on the GooAQ-dev dataset.

Model Capabilities

Calculate Text Similarity
Semantic Search
Text Classification
Text Clustering
Paraphrase Mining

Use Cases

Information Retrieval
Answer Reranking in QA Systems
Rerank candidate answers by their relevance to a query to improve answer quality (see the sketch after this list).
Achieved an mAP of 0.5677 on the GooAQ-dev dataset.
Content Recommendation
Related Content Recommendation
Recommend related content based on user queries
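The reranking use case can be sketched as follows, assuming a recent sentence-transformers release that provides CrossEncoder.rank and the same assumed model id as above; the query and candidate answers are hypothetical.

```python
# Sketch of answer reranking: score candidate answers against a query and sort them.
from sentence_transformers import CrossEncoder

model = CrossEncoder("cross-encoder-testing/reranker-bert-tiny-gooaq-bce")

query = "why is the sky blue"
candidates = [
    "Rayleigh scattering causes shorter blue wavelengths of sunlight to scatter more in the atmosphere.",
    "The sky appears blue because of the ocean reflecting onto it.",
    "Photosynthesis is the process plants use to convert light into energy.",
]

# rank() scores each candidate against the query and returns them sorted by
# descending relevance, with the index of each candidate in the input list.
results = model.rank(query, candidates)
for hit in results:
    print(f"{hit['score']:.4f}  {candidates[hit['corpus_id']]}")
```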