
mxbai-embed-large-v1 GGUF

Developed by ChristianAzinn
mxbai-embed-large-v1 is a sentence embedding model based on the BERT-large architecture. Trained with the AnglE loss function, it supports English text embedding and is offered in multiple quantized GGUF versions to suit different needs.
Downloads: 646
Released: 4/7/2024

Model Overview

This is a high-quality sentence embedding model based on the BERT-large architecture, trained on large-scale data using the AnglE loss function. The model provides multiple quantized versions from 2-bit to 32-bit, suitable for different computational resource scenarios.

Model Features

High-Quality Sentence Embedding
Trained on large-scale high-quality data using the AnglE loss function, achieving SOTA performance at the BERT-large scale.
Multiple Quantized Versions
Provides multiple quantized versions from 2-bit (Q2_K) to 32-bit (FP32), catering to different computational resource needs.
512-Token Context Length
Supports up to 512 tokens of context length, suitable for processing longer texts.
Broad Compatibility
Compatible with mainstream inference frameworks such as llama.cpp and LM Studio.
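As a sketch of how a GGUF file from this repository can be used with llama.cpp's embedding tool (the filename and quantization level below are illustrative assumptions, not the only options):

```shell
# After building llama.cpp, compute an embedding for one sentence.
# Replace the -m path with whichever quantized file you downloaded.
./llama-embedding \
  -m mxbai-embed-large-v1.Q4_K_M.gguf \
  -p "A man is eating food."
```

The tool prints the embedding vector to stdout; lower-bit quantizations trade some embedding quality for a smaller memory footprint.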

Model Capabilities

Text Embedding
Semantic Search
Information Retrieval
Text Similarity Calculation
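The similarity and retrieval capabilities above typically reduce to cosine similarity between embedding vectors. A minimal, self-contained sketch, using short toy vectors in place of the model's real (much higher-dimensional) output:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional vectors standing in for real sentence embeddings.
query = [0.1, 0.3, 0.5, 0.1]
doc_a = [0.1, 0.3, 0.5, 0.1]  # same direction as the query
doc_b = [0.5, 0.1, 0.1, 0.3]

print(cosine_similarity(query, doc_a))  # 1.0 (within floating-point error)
print(cosine_similarity(query, doc_b))
```

In practice, documents are ranked by this score against the query embedding; a higher value means closer semantic similarity.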

Use Cases

Search & Retrieval
Semantic Search
Convert queries and documents into embedding vectors for semantic similarity matching.
Improves the relevance of search results.
Document Clustering
Perform clustering analysis on documents based on embedding vectors.
Discovers semantic relationships between documents.
Recommendation Systems
Content Recommendation
Recommend related content based on embedding similarity.
Improves recommendation accuracy and diversity.
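Recommendation by embedding similarity amounts to ranking candidate items against a reference embedding and keeping the top k. A hedged sketch with hypothetical item names and toy 3-dimensional vectors (real embeddings from this model are far larger):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def recommend(item_vec: list[float], catalog: dict[str, list[float]], k: int = 2) -> list[str]:
    """Return the names of the k catalog items most similar to item_vec."""
    ranked = sorted(catalog.items(), key=lambda kv: cosine(item_vec, kv[1]), reverse=True)
    return [name for name, _ in ranked[:k]]

# Hypothetical catalog of content embeddings.
catalog = {
    "article_a": [0.9, 0.1, 0.0],
    "article_b": [0.8, 0.2, 0.1],
    "article_c": [0.0, 0.1, 0.9],
}
print(recommend([1.0, 0.0, 0.0], catalog))  # ['article_a', 'article_b']
```

Swapping the brute-force sort for an approximate nearest-neighbor index is the usual step once the catalog grows large.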