# Case-insensitive Models

## Ag Nli Bert Mpnet Base Uncased Sentence Similarity V1
This is a sentence-transformers model that maps sentences and paragraphs to a 768-dimensional dense vector space, suitable for tasks like clustering or semantic search.
Tags: Text Embedding, Transformers, Other · Author: abbasgolestani · Downloads: 18 · Likes: 0
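In practice, embeddings from a model like this are compared with cosine similarity. The sketch below (not taken from the model card) uses short toy vectors in place of the model's 768-dimensional output; with the real model, the vectors would come from a `SentenceTransformer.encode(...)` call.

```python
import math

def cosine_similarity(a, b):
    # Sentence-embedding models map each sentence to a dense vector; the
    # similarity of two sentences is the cosine of the angle between them.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dim vectors stand in for the model's 768-dim embeddings.
v1 = [0.1, 0.3, -0.2, 0.9]
v2 = [0.1, 0.25, -0.1, 0.85]
score = cosine_similarity(v1, v2)  # close to 1.0 for near-duplicates
```

The same scoring underlies both clustering (group vectors by pairwise similarity) and semantic search (rank documents by similarity to a query vector).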
## Sentence Bert Base Italian Uncased
This is a case-insensitive Italian BERT model based on sentence-transformers, designed to generate 768-dimensional dense vector representations for sentences and paragraphs.
License: MIT · Tags: Text Embedding, Transformers, Other · Author: nickprock · Downloads: 3,228 · Likes: 10
## Bert Mini
One of the small BERT models released by Google Research: English-only, case-insensitive, and trained with WordPiece masking.
License: MIT · Tags: Large Language Model, Transformers, English · Author: lyeonii · Downloads: 263 · Likes: 1
## Legacy1
BERT (Bidirectional Encoder Representations from Transformers) is a pre-trained language model based on the Transformer architecture, developed by Google.
Tags: Large Language Model, Transformers · Author: khoon485 · Downloads: 19 · Likes: 0
## Roberta Small Greek
A small Greek language model based on the RoBERTa architecture, with roughly half the parameters of the base model, suited to masked-token filling on Greek text.
Tags: Large Language Model, Transformers, Other · Author: ClassCat · Downloads: 22 · Likes: 2
## Bert Tiny Uncased
A tiny, case-insensitive version of the BERT model, suitable for natural language processing tasks in resource-constrained environments.
License: Apache-2.0 · Tags: Large Language Model, Transformers, English · Author: gaunernst · Downloads: 3,297 · Likes: 4
## Distilbert Base Uncased Mnli
A case-insensitive DistilBERT model fine-tuned on the Multi-Genre Natural Language Inference (MNLI) dataset, optimized for zero-shot classification tasks.
Tags: Text Classification, Transformers, English · Author: optimum · Downloads: 53 · Likes: 3
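NLI-based zero-shot classification, which this model is optimized for, works by pairing the input text (premise) with one hypothesis per candidate label ("This text is about X.") and softmaxing the model's entailment logits across labels. A minimal sketch of that final scoring step, with hypothetical logit values standing in for real model output:

```python
import math

def zero_shot_scores(entailment_logits):
    # Softmax the per-label entailment logits into a probability
    # distribution over the candidate labels (numerically stabilized
    # by subtracting the max logit before exponentiating).
    m = max(entailment_logits.values())
    exps = {label: math.exp(z - m) for label, z in entailment_logits.items()}
    total = sum(exps.values())
    return {label: e / total for label, e in exps.items()}

# Hypothetical entailment logits for three candidate labels.
scores = zero_shot_scores({"sports": 2.1, "politics": -0.5, "science": 0.3})
best = max(scores, key=scores.get)  # -> "sports"
```

This is the mechanism behind zero-shot pipelines: no label-specific training is needed, because the NLI model judges each label hypothesis independently.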
## Roberta TR Medium Bpe 44k
A Turkish RoBERTa model, case-insensitive, pre-trained with the masked language modeling (MLM) objective.
Tags: Large Language Model, Transformers, Other · Author: ctoraman · Downloads: 48 · Likes: 0
## Roberta TR Medium Wp 44k
A RoBERTa model for the Turkish language, pre-trained with the masked language modeling objective, case-insensitive, suitable for Turkish text processing tasks.
Tags: Large Language Model, Transformers, Other · Author: ctoraman · Downloads: 84 · Likes: 0
## Roberta TR Medium Bpe 16k
A medium-sized RoBERTa model pre-trained on Turkish with the masked language modeling (MLM) objective, case-insensitive.
Tags: Large Language Model, Transformers, Other · Author: ctoraman · Downloads: 26 · Likes: 0
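Several cards on this page cite the masked language modeling (MLM) objective. The BERT/RoBERTa recipe selects roughly 15% of input tokens and, of those, replaces 80% with a mask token, 10% with a random token, and leaves 10% unchanged; the model is trained to recover the originals. A minimal sketch of that masking step (function name and token-level simplification are illustrative, not from any of these model cards):

```python
import random

def mlm_mask(tokens, mask_token="[MASK]", vocab=None, p=0.15, seed=0):
    # BERT/RoBERTa-style dynamic masking: ~15% of tokens are selected;
    # of those, 80% become the mask token, 10% a random token, and 10%
    # are left unchanged. Unselected positions get a None label and are
    # excluded from the loss.
    rng = random.Random(seed)
    vocab = vocab or tokens
    out, labels = [], []
    for tok in tokens:
        if rng.random() < p:
            labels.append(tok)  # the model must predict the original token
            r = rng.random()
            if r < 0.8:
                out.append(mask_token)
            elif r < 0.9:
                out.append(rng.choice(vocab))
            else:
                out.append(tok)
        else:
            labels.append(None)  # not part of the loss
            out.append(tok)
    return out, labels

tokens = "the quick brown fox jumps over the lazy dog".split()
masked, labels = mlm_mask(tokens)
```

Real implementations operate on subword IDs rather than words, but the selection logic is the same.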
## English Pert Base
PERT is a BERT-based pre-trained language model designed for English text processing tasks.
Tags: Large Language Model, Transformers, English · Author: hfl · Downloads: 37 · Likes: 6
## Bert Base Uncased Squad V1
A question-answering model based on the case-insensitive BERT-base model, fine-tuned on the SQuAD 1.1 dataset.
License: MIT · Tags: Question Answering System, English · Author: csarron · Downloads: 1,893 · Likes: 13
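SQuAD-style extractive QA models emit one start logit and one end logit per context token, and the answer is the span maximizing their sum. A minimal sketch of that span-selection step, with hypothetical logits in place of real model output (the `max_len` cap is a common heuristic, not part of this model card):

```python
def best_span(start_logits, end_logits, max_len=15):
    # Pick the token span (i, j) with i <= j < i + max_len that
    # maximizes start_logits[i] + end_logits[j].
    best, best_score = (0, 0), float("-inf")
    for i, s in enumerate(start_logits):
        for j in range(i, min(i + max_len, len(end_logits))):
            score = s + end_logits[j]
            if score > best_score:
                best_score = score
                best = (i, j)
    return best

# Hypothetical per-token logits for a 6-token context.
start = [0.1, 2.5, 0.3, 0.2, 0.1, 0.0]
end = [0.0, 0.1, 0.2, 3.1, 0.4, 0.1]
span = best_span(start, end)  # -> (1, 3)
```

The selected token span is then mapped back to character offsets in the original context to produce the answer text.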
## Indobert Lite Large P1
IndoBERT is an advanced language model for Indonesian, based on the BERT architecture, trained using masked language modeling and next sentence prediction objectives.
License: MIT · Tags: Large Language Model, Transformers, Other · Author: indobenchmark · Downloads: 42 · Likes: 0
## Xlarge
Funnel Transformer is an English pre-trained text model based on self-supervised learning with an ELECTRA-like objective; it processes language efficiently by filtering redundancy out of the sequence.
License: Apache-2.0 · Tags: Large Language Model, Transformers, English · Author: funnel-transformer · Downloads: 31 · Likes: 1
## Gpt2 Small Indonesian 522M
A GPT2-small model pretrained on Indonesian Wikipedia data, specializing in Indonesian text generation tasks.
License: MIT · Tags: Large Language Model, Other · Author: cahya · Downloads: 1,900 · Likes: 9
## Indobert Lite Base P2
IndoBERT is a top-tier language model developed for Indonesian, based on the BERT architecture, trained using masked language modeling and next sentence prediction objectives.
License: MIT · Tags: Large Language Model, Transformers, Other · Author: indobenchmark · Downloads: 2,498 · Likes: 0
## Ruperta Base
RuPERTa is a case-insensitive RoBERTa model trained on a large Spanish corpus using RoBERTa's improved pre-training methods, suitable for various Spanish NLP tasks.
Tags: Large Language Model, Spanish · Author: mrm8488 · Downloads: 39 · Likes: 2
## Intermediate
A Transformer model pre-trained on an English corpus with ELECTRA-like objectives, learning text representations through self-supervised learning.
License: Apache-2.0 · Tags: Large Language Model, Transformers, English · Author: funnel-transformer · Downloads: 24 · Likes: 0
## Kinyaroberta Small
A RoBERTa model pretrained on Kinyarwanda datasets with the Masked Language Modeling (MLM) objective, using case-insensitive tokenization during pretraining.
Tags: Large Language Model, Transformers · Author: jean-paul · Downloads: 38 · Likes: 0
## Sbert Uncased Finnish Paraphrase
A Finnish sentence-BERT model trained on top of FinBERT, used for sentence similarity calculation and feature extraction.
Tags: Text Embedding, Transformers, Other · Author: TurkuNLP · Downloads: 895 · Likes: 2
## Sbert Large Nlu Ru
A large Russian language model based on the BERT architecture, specifically designed for generating sentence embeddings, with case-insensitive processing support.
License: MIT · Tags: Text Embedding, Transformers, Other · Author: ai-forever · Downloads: 386.96k · Likes: 84
## Small
A Transformer model pre-trained on an English corpus with ELECTRA-like objectives, suitable for text feature extraction and downstream fine-tuning.
License: Apache-2.0 · Tags: Large Language Model, Transformers, English · Author: funnel-transformer · Downloads: 6,084 · Likes: 5
## Danish Bert Botxo
A Danish BERT model developed by Certainly (formerly BotXO), supporting case-insensitive Danish text processing.
Tags: Large Language Model, Other · Author: Maltehb · Downloads: 1,306 · Likes: 14
© 2025 AIbase