
Gte Qwen2 1.5B Instruct GGUF

Developed by mav23
A 1.5B-parameter sentence embedding model based on the Qwen2 architecture, specializing in sentence similarity tasks with strong performance on the MTEB benchmark.
Downloads 169
Release Time: 10/11/2024

Model Overview

This model is a sentence embedding model based on the Qwen2 architecture, primarily used for calculating sentence similarity. It demonstrates robust performance across multiple MTEB benchmark tasks, including classification, clustering, retrieval, and reranking.
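Sentence similarity from an embedding model is typically computed as the cosine of the angle between the two sentences' embedding vectors. A minimal sketch of that step, using toy vectors in place of real model output (the vectors below are illustrative stand-ins, not produced by this model):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-dimensional "embeddings" standing in for real model output.
emb_query = [0.2, 0.7, 0.1, 0.5]
emb_close = [0.25, 0.68, 0.05, 0.55]  # a semantically similar sentence
emb_far   = [0.9, -0.3, 0.4, -0.6]    # an unrelated sentence

print(cosine_similarity(emb_query, emb_close))  # near 1.0
print(cosine_similarity(emb_query, emb_far))    # much lower
```

In a real pipeline the two vectors would come from encoding the sentences with the model; the comparison step itself is the same.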

Model Features

Powerful sentence embedding capability
Excels on multiple MTEB benchmark tasks and generates high-quality sentence embeddings.
Multi-task support
Supports various NLP tasks such as classification, clustering, retrieval, and reranking.
Efficient parameter scale
The 1.5B-parameter scale provides strong representation learning in a compact model.

Model Capabilities

Sentence similarity calculation
Text classification
Text clustering
Information retrieval
Reranking
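For the retrieval and reranking capabilities listed above, a common pattern is to embed a query and a set of candidate texts, then sort the candidates by cosine similarity to the query. A hedged sketch with stand-in vectors (a real pipeline would produce these embeddings with the model):

```python
import numpy as np

def rank_by_similarity(query_vec, candidate_vecs):
    """Return candidate indices sorted from most to least similar to the query."""
    q = np.asarray(query_vec, dtype=float)
    q = q / np.linalg.norm(q)
    scores = []
    for vec in candidate_vecs:
        v = np.asarray(vec, dtype=float)
        scores.append(float(np.dot(q, v / np.linalg.norm(v))))
    # Highest cosine similarity first.
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return order, scores

# Stand-in embeddings; these are illustrative, not model output.
query = [0.6, 0.8, 0.0]
candidates = [
    [0.0, 1.0, 0.0],      # partially related
    [0.59, 0.81, 0.02],   # near-duplicate of the query
    [-0.7, 0.1, 0.7],     # unrelated
]
order, scores = rank_by_similarity(query, candidates)
print(order)  # the near-duplicate ranks first
```

The same scoring step serves both retrieval (over a large corpus, usually via an approximate-nearest-neighbor index) and reranking (over a short shortlist).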

Use Cases

E-commerce
Product review classification
Sentiment classification for Amazon product reviews
Achieved 96.61% accuracy on the AmazonPolarityClassification task
Counterfactual review detection
Identifying counterfactual reviews on Amazon
Achieved 83.99% accuracy on the AmazonCounterfactualClassification task
Finance
Bank customer service issue classification
Automatic classification of bank customer inquiries
Achieved 87.31% accuracy on the Banking77Classification task
Academic research
Paper clustering
Topic clustering for arXiv and biorxiv papers
Achieved a V-measure of 50.51 on the ArxivClusteringP2P task
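The V-measure cited above is the harmonic mean of a clustering's homogeneity and completeness against gold labels; it is invariant to cluster label permutation, so only the grouping matters. A small illustration with scikit-learn on hypothetical toy labels (not the ArxivClusteringP2P data):

```python
from sklearn.metrics import v_measure_score

# Gold topic labels for six toy papers and two candidate clusterings.
gold        = [0, 0, 0, 1, 1, 1]
perfect     = [1, 1, 1, 0, 0, 0]  # same grouping, cluster ids swapped
one_mistake = [0, 0, 1, 1, 1, 1]  # one paper assigned to the wrong cluster

print(v_measure_score(gold, perfect))      # 1.0 -- label names don't matter
print(v_measure_score(gold, one_mistake))  # below 1.0
```

Benchmark scores such as the 50.51 above are this metric (scaled to 0-100) computed over the model's embeddings clustered on the evaluation corpus.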