# 768-dimensional dense vectors

**Xlm Roberta Ua Distilled** · panalexeu · MIT
A fine-tuned sentence-transformer model based on xlm-roberta-base that supports English and Ukrainian, suited to semantic textual similarity and semantic search.
Tags: Text Embedding, Multilingual · Downloads: 121 · Likes: 1
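Models in this list are standard sentence-transformers checkpoints, so they can be loaded and queried with the `sentence-transformers` library. Below is a minimal sketch of embedding a mixed English/Ukrainian batch and comparing the vectors with cosine similarity; the repository ID is inferred from the author and model name above and may differ from the actual ID.

```python
# Minimal sketch: embed sentences and compare them with cosine similarity.
# The model ID below is an assumption inferred from the listing entry above;
# any 768-dimensional sentence-transformers model from this page can be substituted.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("panalexeu/xlm-roberta-ua-distilled")

sentences = [
    "The cat is sleeping on the sofa.",
    "Кіт спить на дивані.",               # Ukrainian paraphrase of the first sentence
    "The stock market fell sharply today.",
]

# encode() returns one 768-dimensional vector per input sentence
embeddings = model.encode(sentences, convert_to_tensor=True)
print(embeddings.shape)                    # -> torch.Size([3, 768])

# Pairwise cosine similarities; semantically close pairs score higher
similarities = util.cos_sim(embeddings, embeddings)
print(similarities)
```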
**Ukr Paraphrase Multilingual Mpnet Base** · lang-uk · Apache-2.0
A sentence-embedding model optimized for Ukrainian, based on the multilingual MPNet architecture, suited to semantic similarity and feature extraction.
Tags: Text Embedding · Downloads: 1,110 · Likes: 8

**Turkish Base Bert Uncased Mean Nli Stsb Tr** · atasoglu · MIT
A sentence-embedding model built on an uncased Turkish BERT, suited to sentence-similarity computation and semantic search.
Tags: Text Embedding, Transformers, Other · Downloads: 744 · Likes: 2

**E5 Base Mlqa Finetuned Arabic For Rag** · OmarAlsaabi
A sentence-transformers model that maps sentences and paragraphs into a 768-dimensional dense vector space, suited to clustering and semantic search.
Tags: Text Embedding · Downloads: 92 · Likes: 5
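For retrieval-style use (for example as the retriever in a RAG pipeline), the same library provides `util.semantic_search`. The sketch below uses toy English passages purely for illustration, even though the model above targets Arabic; the repository ID is an assumption inferred from the entry.

```python
# Minimal retrieval sketch for a RAG-style pipeline: embed a small corpus once,
# then retrieve the passages most similar to a query.
from sentence_transformers import SentenceTransformer, util

# Repository ID inferred from the listing; treat it as a placeholder.
model = SentenceTransformer("OmarAlsaabi/e5-base-mlqa-finetuned-arabic-for-rag")

# The original E5 checkpoints expect "query: " / "passage: " prefixes,
# so they are kept here as a precaution.
corpus = [
    "passage: The Nile is the longest river in Africa.",
    "passage: Photosynthesis converts sunlight into chemical energy.",
    "passage: The capital of Japan is Tokyo.",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)   # shape: (3, 768)

query_embedding = model.encode(
    "query: Which river is the longest in Africa?", convert_to_tensor=True
)

# Top-k corpus entries ranked by cosine similarity to the query
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(f"{hit['score']:.3f}  {corpus[hit['corpus_id']]}")
```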
**Simcse Model XLMR** · kornwtp · Apache-2.0
A sentence-transformers model based on XLM-R and trained with SimCSE; it maps sentences and paragraphs into a 768-dimensional dense vector space for clustering and semantic search.
Tags: Text Embedding, Transformers · Downloads: 20 · Likes: 0

**Simcse Model Phayathaibert** · kornwtp · Apache-2.0
A sentence-transformers model that maps sentences and paragraphs into a 768-dimensional dense vector space, suited to clustering and semantic search.
Tags: Text Embedding, Transformers · Downloads: 123 · Likes: 2
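The clustering use case mentioned in the SimCSE entries above amounts to running any standard clustering algorithm on the 768-dimensional embeddings. A minimal sketch with scikit-learn's k-means, assuming the repository ID `kornwtp/simcse-model-XLMR` (inferred from the listing):

```python
# Minimal clustering sketch: embed short texts and group them with k-means.
# The model ID is an assumption based on the listing entry (author kornwtp).
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

model = SentenceTransformer("kornwtp/simcse-model-XLMR")

texts = [
    "The weather is sunny and warm today.",
    "Heavy rain is expected this weekend.",
    "The new phone has a faster processor.",
    "This laptop ships with 16 GB of RAM.",
]

# Each text becomes a 768-dimensional vector; k-means then groups nearby vectors
embeddings = model.encode(texts)                     # numpy array of shape (4, 768)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)

for text, label in zip(texts, labels):
    print(label, text)
```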
**ST NLI Ca Paraphrase Multilingual Mpnet Base** · projecte-aina
A multilingual sentence-embedding model based on sentence-transformers with Catalan support; it maps text to a 768-dimensional vector space.
Tags: Text Embedding, Transformers · Downloads: 56 · Likes: 1

**Sentence Transformers Paraphrase Multilingual Mpnet Base V2** · tgsc · Apache-2.0
A multilingual sentence-embedding model that maps text to a 768-dimensional vector space, suited to semantic search and clustering.
Tags: Text Embedding, Transformers · Downloads: 17 · Likes: 1

**Sentence Bert Base Italian Uncased** · nickprock · MIT
An uncased Italian BERT model built with sentence-transformers that produces 768-dimensional dense vector representations of sentences and paragraphs.
Tags: Text Embedding, Transformers, Other · Downloads: 3,228 · Likes: 10

**Xlm Roberta De** · airnicco8
A German sentence-embedding model based on the XLM-RoBERTa architecture; it maps text to a 768-dimensional vector space for semantic search and clustering.
Tags: Text Embedding, Transformers, Other · Downloads: 22 · Likes: 0

**Sentence Bert Base** · efederici
An Italian sentence-embedding model based on sentence-transformers that maps text to a 768-dimensional vector space.
Tags: Text Embedding, Transformers, Other · Downloads: 409 · Likes: 8

**Sentencetransformer Bert Hinglish Big** · aditeyabaral
A BERT-based sentence-transformer model tuned for Hinglish (mixed Hindi-English) that maps sentences into a 768-dimensional vector space.
Tags: Text Embedding, Transformers · Downloads: 16 · Likes: 0

**Simcse Model M Bert Thai Cased** · mrp
A Thai sentence-embedding model based on mBERT, trained with SimCSE on Thai Wikipedia, that maps text to 768-dimensional vectors.
Tags: Text Embedding, Transformers · Downloads: 1,637 · Likes: 7

**Patentsberta** · AI-Growth-Lab
A hybrid deep-NLP model that uses an augmented SBERT for patent distance computation and classification.
Tags: Text Embedding, Transformers · Downloads: 35.15k · Likes: 41
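The classification use case from the Patentsberta entry can be sketched the same way: embed the documents, then train a lightweight classifier on the 768-dimensional vectors. The repository ID `AI-Growth-Lab/PatentSBERTa`, the toy abstracts, and the labels below are assumptions for illustration only.

```python
# Minimal sketch of embedding-based classification in the spirit of the
# Patentsberta entry above. Model ID and toy data are illustrative assumptions.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

model = SentenceTransformer("AI-Growth-Lab/PatentSBERTa")

abstracts = [
    "A battery electrode comprising a lithium compound.",
    "An electrolyte additive for improved battery cycle life.",
    "A method for compressing video streams over a network.",
    "A codec that reduces bandwidth for real-time video calls.",
]
labels = ["chemistry", "chemistry", "telecom", "telecom"]

# Each abstract becomes a 768-dimensional vector; a simple linear classifier
# is then fit on top of the embeddings.
X = model.encode(abstracts)                          # shape: (4, 768)
clf = LogisticRegression(max_iter=1000).fit(X, labels)

query = "An anode material based on a lithium alloy."
print(clf.predict(model.encode([query])))            # expected: ['chemistry']
```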