# RoBERTa-based Models
## Ko Sroberta Nli
A Korean sentence embedding model based on sentence-transformers that maps sentences and paragraphs into a 768-dimensional dense vector space.

Tags: Text Embedding, Korean · Author: jhgan · Downloads: 3,840 · Likes: 8
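
Since the card describes a sentence-transformers model, a minimal embedding sketch is shown below; the repo id `jhgan/ko-sroberta-nli` is inferred from the model name and author above and should be verified on the Hugging Face Hub.

```python
# Minimal sketch: encoding Korean sentences with sentence-transformers.
# The repo id "jhgan/ko-sroberta-nli" is an assumption inferred from the listing.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("jhgan/ko-sroberta-nli")

sentences = [
    "안녕하세요, 반갑습니다.",  # "Hello, nice to meet you."
    "오늘 날씨가 좋네요.",      # "The weather is nice today."
]

# Each sentence is mapped to one 768-dimensional dense vector.
embeddings = model.encode(sentences)
print(embeddings.shape)  # expected: (2, 768)
```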
## Roberta Base Bne Squad 2.0 Es
A RoBERTa-based Spanish question-answering model fine-tuned on the squad_es dataset, suitable for Spanish reading-comprehension tasks.

Tags: Question Answering System, Transformers, Spanish · Author: jamarju · Downloads: 20 · Likes: 0
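
A hedged sketch of extractive question answering with the transformers pipeline follows; the repo id `jamarju/roberta-base-bne-squad-2.0-es` is an assumption inferred from the listing.

```python
# Minimal sketch: Spanish extractive question answering via the transformers pipeline.
# The repo id below is inferred from the listing; verify it on the Hugging Face Hub.
from transformers import pipeline

qa = pipeline("question-answering", model="jamarju/roberta-base-bne-squad-2.0-es")

result = qa(
    question="¿Dónde vivo?",                   # "Where do I live?"
    context="Me llamo Ana y vivo en Madrid.",  # "My name is Ana and I live in Madrid."
)
print(result["answer"], result["score"])
```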
## Camembert Base Legacy
CamemBERT is a French language model based on RoBERTa; this version was trained on 4 GB of Wikipedia text.

Tags: Large Language Model, Transformers, French · Author: almanach · Downloads: 24.98k · Likes: 6
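
As a masked language model, CamemBERT can be exercised through the fill-mask pipeline; the sketch below assumes the repo id `almanach/camembert-base-legacy`, inferred from the listing.

```python
# Minimal sketch: masked-word prediction with CamemBERT via the fill-mask pipeline.
# The repo id "almanach/camembert-base-legacy" is an assumption; check the Hub for the exact id.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="almanach/camembert-base-legacy")

# CamemBERT uses the RoBERTa-style "<mask>" token.
for prediction in fill_mask("Paris est la <mask> de la France."):
    print(prediction["token_str"], round(prediction["score"], 3))
```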
## Ko Sroberta Sts
A Korean sentence embedding model based on sentence-transformers that maps sentences and paragraphs into a 768-dimensional dense vector space.

Tags: Text Embedding · Author: jhgan · Downloads: 86 · Likes: 0
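
The STS variant is typically used to score semantic similarity between sentence pairs; a minimal sketch follows, assuming the repo id `jhgan/ko-sroberta-sts` (inferred from the listing).

```python
# Minimal sketch: scoring semantic similarity with the STS variant.
# The repo id "jhgan/ko-sroberta-sts" is an assumption inferred from the listing.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("jhgan/ko-sroberta-sts")

emb1 = model.encode("고양이가 소파에서 자고 있다.", convert_to_tensor=True)    # "A cat is sleeping on the sofa."
emb2 = model.encode("소파 위에서 고양이가 잠을 잔다.", convert_to_tensor=True)  # "A cat sleeps on the sofa."

# Cosine similarity of the two 768-dimensional embeddings, in [-1, 1].
print(util.cos_sim(emb1, emb2).item())
```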
## Klue Roberta Base Sae
A RoBERTa model trained on a Korean dataset for sentence intent understanding tasks.

Tags: Large Language Model, Transformers · Author: ehdwns1516 · Downloads: 26 · Likes: 0
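
If the model exposes a sequence-classification head, it could be called through the text-classification pipeline; both the repo id `ehdwns1516/klue-roberta-base-sae` and the task type are assumptions inferred from the listing, so check the model card before relying on this.

```python
# Hedged sketch: treating the intent-understanding model as a text classifier.
# Both the repo id and the "text-classification" task are assumptions; the model
# card defines the actual task head and label set.
from transformers import pipeline

classifier = pipeline("text-classification", model="ehdwns1516/klue-roberta-base-sae")

print(classifier("내일 오전 7시에 알람 맞춰 줘."))  # "Set an alarm for 7 a.m. tomorrow."
```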