# Lightweight embedding models

## Multilingual E5 Small Ko V2
A Korean sentence-transformers model fine-tuned from intfloat/multilingual-e5-small for Korean retrieval tasks.
- **Org:** dragonkue · **License:** Apache-2.0 · **Tags:** Text Embedding, Multilingual · **Downloads:** 252 · **Likes:** 2
## Dragonkue KoEn E5 Tiny ONNX
A sentence-transformers model fine-tuned from intfloat/multilingual-e5-small, optimized for Korean retrieval tasks; it maps text to a 384-dimensional vector space.
- **Org:** exp-models · **License:** Apache-2.0 · **Tags:** Text Embedding, Multilingual · **Downloads:** 51 · **Likes:** 1
## Glucose Base Ja V2
A general-purpose Japanese text embedding model, optimized for retrieval tasks with strong performance on CPUs.
- **Org:** pkshatech · **License:** Apache-2.0 · **Tags:** Text Embedding, Japanese · **Downloads:** 25.25k · **Likes:** 20
## Ce Esci MiniLM L12 V2
A sentence-transformers-based model that maps sentences and paragraphs into a 384-dimensional dense vector space, suitable for tasks such as clustering or semantic search.
- **Org:** metarank · **Tags:** Text Embedding · **Downloads:** 1,132 · **Likes:** 0
## Kpf Sbert 128d V1
A sentence-transformers-based embedding model that maps sentences and paragraphs into a 128-dimensional dense vector space, suitable for tasks such as clustering or semantic search.
- **Org:** bongsoo · **Tags:** Text Embedding · **Downloads:** 759 · **Likes:** 3
## All Datasets V4 MiniLM L12
A sentence embedding model based on MiniLM-L12, fine-tuned on over 1 billion sentence pairs with self-supervised contrastive learning to produce high-quality semantic vector representations.
- **Org:** flax-sentence-embeddings · **Tags:** Text Embedding, English · **Downloads:** 2,084 · **Likes:** 2
## All Datasets V3 MiniLM L6
A sentence embedding model based on the MiniLM architecture, trained on over 1 billion sentence pairs with self-supervised contrastive learning to produce high-quality sentence vector representations.
- **Org:** flax-sentence-embeddings · **Tags:** Text Embedding, English · **Downloads:** 46 · **Likes:** 0
## Rut5 Base
A streamlined version of google/mt5-base, optimized for Russian and English with 58% fewer parameters.
- **Org:** cointegrated · **License:** MIT · **Tags:** Large Language Model, Multilingual · **Downloads:** 27.85k · **Likes:** 11
## All Datasets V3 MiniLM L12
A sentence embedding model based on the MiniLM-L12 architecture, trained on over 1 billion sentence pairs with contrastive learning to produce high-quality semantic vector representations.
- **Org:** flax-sentence-embeddings · **Tags:** Text Embedding, English · **Downloads:** 887 · **Likes:** 1
## Test
A tiny Russian BERT model suitable for various natural language processing tasks.
- **Org:** k0t1k · **License:** MIT · **Tags:** Text Embedding, Transformers, Multilingual · **Downloads:** 21 · **Likes:** 0
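All of the embedding models above map text to fixed-size dense vectors (128 or 384 dimensions in this list) that are typically compared with cosine similarity for clustering or semantic search. A minimal pure-Python sketch of that retrieval step, using toy 4-dimensional vectors as stand-ins for real model outputs (the vectors and document names here are illustrative assumptions, not output of any listed model):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy stand-ins for sentence embeddings; a real model such as
# intfloat/multilingual-e5-small would emit 384-dimensional vectors.
query = [0.1, 0.9, 0.0, 0.2]
passages = {
    "doc_a": [0.1, 0.8, 0.1, 0.3],
    "doc_b": [0.9, 0.0, 0.4, 0.0],
}

# Rank passages by similarity to the query, as a retrieval pipeline would.
ranked = sorted(passages,
                key=lambda k: cosine_similarity(query, passages[k]),
                reverse=True)
print(ranked[0])  # doc_a: its vector points in nearly the same direction as the query
```

In practice the vectors would come from a library such as sentence-transformers, with each model's own input conventions (E5-family models, for instance, are documented to expect `query: ` / `passage: ` prefixes).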