BGE Base EN v1.5 GGUF

Developed by CompendiumLabs
This project provides the BGE embedding model stored in GGUF format, suitable for use with llama.cpp, where it offers better performance than running the model through the transformers library.
Downloads 1,108
Release Time: 2/17/2024

Model Overview

This is the GGUF-format version of the BGE embedding model. It focuses on text embedding tasks and is suited to scenarios that require efficient generation of embedding vectors.
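As a minimal sketch of how a GGUF embedding model like this can be loaded through llama.cpp's Python bindings (llama-cpp-python), assuming a locally downloaded file named bge-base-en-v1.5-q8_0.gguf (the exact filename depends on which quantization you choose):

```python
# Minimal sketch: computing a text embedding from a GGUF BGE model with
# llama-cpp-python. The model path is an assumption; point it at whichever
# quantized file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="bge-base-en-v1.5-q8_0.gguf",  # assumed local path
    embedding=True,                           # run the model in embedding mode
    verbose=False,
)

# embed() returns the embedding vector for the input text.
vector = llm.embed("GGUF models can be run efficiently with llama.cpp.")
print(len(vector))  # 768 for bge-base models
```

The same GGUF file can also be used directly with llama.cpp's own embedding example program.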

Model Features

GGUF Format Optimization
Stored in GGUF format, the model delivers significant performance improvements when used with llama.cpp.
Multiple Quantization Options
Four quantized versions, F32, F16, Q8_0, and Q4_K_M, are provided to meet different precision and performance requirements (a download sketch follows this section).
CPU Acceleration
Achieves up to a 30% speedup on CPU while keeping precision loss minimal.
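As a sketch of how one of these quantized files could be fetched, assuming the model is hosted in a Hugging Face repository named CompendiumLabs/bge-base-en-v1.5-gguf with per-quantization filenames (both the repository id and the filename are assumptions; check the actual file listing):

```python
# Sketch: downloading a specific quantization of the GGUF model.
# repo_id and filename are assumptions; verify against the real repository.
from huggingface_hub import hf_hub_download

# Q4_K_M is the smallest of the four listed variants; F32 and F16 keep full
# or half precision, and Q8_0 is a common size/precision middle ground.
model_path = hf_hub_download(
    repo_id="CompendiumLabs/bge-base-en-v1.5-gguf",  # assumed repository id
    filename="bge-base-en-v1.5-q4_k_m.gguf",         # assumed filename
)
print(model_path)  # local path to pass to llama.cpp or llama-cpp-python
```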

Model Capabilities

Text Embedding
Batch Processing (see the sketch after this list)
Efficient Inference
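The batch-processing capability can be illustrated with a short sketch that embeds several documents in one call; it reuses the llama-cpp-python setup from the earlier sketch, and the model path is again an assumption:

```python
# Sketch: embedding several documents at once (batch processing).
# The model path is an assumption; use your downloaded GGUF file.
from llama_cpp import Llama

llm = Llama(model_path="bge-base-en-v1.5-q8_0.gguf", embedding=True, verbose=False)

docs = [
    "GGUF is a file format used by llama.cpp.",
    "BGE models produce dense text embeddings.",
    "Quantization reduces model size and speeds up inference.",
]

# create_embedding() accepts a list of strings and returns one vector per input.
result = llm.create_embedding(docs)
vectors = [item["embedding"] for item in result["data"]]
print(len(vectors), len(vectors[0]))  # number of documents, embedding dimension
```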

Use Cases

Information Retrieval
Document Similarity Calculation
Compute the semantic similarity between documents (the cosine-similarity sketch after this list illustrates this).
Natural Language Processing
Semantic Search
Build a search system that matches on meaning rather than keywords.
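Both use cases can be sketched together by ranking documents against a query with the cosine similarity of their embeddings. This assumes the same llama-cpp-python setup as above; the query prefix is the instruction commonly recommended for BGE v1.5 retrieval queries, so confirm it (and the pooling behavior of your llama-cpp-python version) against the upstream model card.

```python
# Sketch: semantic search / document similarity via cosine similarity.
# Model path and query prefix are assumptions; check the upstream model card.
import numpy as np
from llama_cpp import Llama

llm = Llama(model_path="bge-base-en-v1.5-q8_0.gguf", embedding=True, verbose=False)

def embed(text: str) -> np.ndarray:
    # L2-normalize so that a dot product equals cosine similarity.
    v = np.asarray(llm.embed(text), dtype=np.float32)
    return v / np.linalg.norm(v)

documents = [
    "llama.cpp runs GGUF models efficiently on CPUs.",
    "Paris is the capital of France.",
    "Quantized embedding models trade a little accuracy for speed.",
]
doc_vectors = np.stack([embed(d) for d in documents])

# BGE v1.5 retrieval queries are usually prefixed with an instruction.
query = "Represent this sentence for searching relevant passages: how do I run GGUF models?"
scores = doc_vectors @ embed(query)  # one cosine similarity per document

for score, doc in sorted(zip(scores, documents), reverse=True):
    print(f"{score:.3f}  {doc}")
```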