# Unsupervised learning

**The Teacher V2** (shiviklabs) · Text Classification, Transformers · 172 downloads · 0 likes
A Transformers model for zero-shot text classification that can classify text without a large amount of labeled data.

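Zero-shot classifiers of this kind typically recast classification as natural-language inference: each candidate label is slotted into a hypothesis template, and the model scores whether the input text entails that hypothesis. The sketch below shows only the label-scoring step, with hypothetical toy entailment logits standing in for real model outputs:

```python
import math

def zero_shot_scores(entailment_logits):
    """Softmax over per-label entailment logits -> label probabilities."""
    m = max(entailment_logits.values())
    exps = {lbl: math.exp(v - m) for lbl, v in entailment_logits.items()}
    total = sum(exps.values())
    return {lbl: e / total for lbl, e in exps.items()}

text = "The battery dies after an hour."
labels = ["battery life", "screen quality", "price"]
# One hypothesis per label; a real NLI model scores each (text, hypothesis) pair.
hypotheses = {lbl: f"This text is about {lbl}." for lbl in labels}
# Toy logits in place of model outputs (illustrative only).
logits = {"battery life": 3.1, "screen quality": -0.4, "price": 0.2}
probs = zero_shot_scores(logits)
best = max(probs, key=probs.get)  # "battery life"
```

The hypothesis template is what lets an unseen label set be used at inference time with no retraining.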
**HVI CIDNet Generalization** (Fediory, MIT license) · Image Enhancement · 1,107 downloads · 0 likes
HVI-CIDNet is a deep learning model for low-light image enhancement, built on a novel HVI color space.

**LLAMA 3 8B Unaligned BETA GGUF** (bartowski) · Large Language Model · 542 downloads · 10 likes
An 8B-parameter unaligned beta model based on the LLaMA-3 architecture, offered in multiple quantization versions to suit different hardware.

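GGUF builds like this one ship several quantization levels trading file size against fidelity. The core idea behind the simplest scheme, 8-bit codes plus one floating-point scale, can be sketched as below; real GGUF formats quantize per fixed-size block and include many other variants:

```python
def quantize_q8(weights):
    """Symmetric 8-bit quantization: int8 codes plus a single float scale."""
    scale = max(abs(w) for w in weights) / 127.0
    codes = [round(w / scale) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate weights from codes and scale."""
    return [c * scale for c in codes]

w = [0.12, -0.5, 0.31, 1.27]       # toy weight values
codes, scale = quantize_q8(w)
approx = dequantize(codes, scale)  # each entry within scale/2 of the original
```

Lower-bit variants shrink the file further at the cost of a larger reconstruction error per weight.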
**Yolo11 Fish Detector Grayscale** (akridge) · Object Detection, TensorBoard, English · 38 downloads · 1 like
A grayscale underwater fish detection model trained on the YOLO11n architecture using semi-supervised learning techniques.

**Depth Anything Vits14** (LiheYoung) · 3D Vision, Transformers · 8,130 downloads · 6 likes
Depth Anything is a depth estimation model that leverages large-scale unlabeled data to improve performance, suited to monocular depth estimation tasks.

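Relative-depth models in this family predict depth only up to an affine transform, so comparing a prediction against metric ground truth typically starts with a least-squares scale-and-shift alignment. A minimal sketch of that alignment step, with toy values:

```python
def align_scale_shift(pred, gt):
    """Closed-form least-squares scale s and shift t minimizing
    sum((s * p + t - g)^2) over paired depth values."""
    n = len(pred)
    sp, sg = sum(pred), sum(gt)
    spp = sum(p * p for p in pred)
    spg = sum(p * g for p, g in zip(pred, gt))
    s = (n * spg - sp * sg) / (n * spp - sp * sp)
    t = (sg - s * sp) / n
    return s, t

pred = [0.1, 0.4, 0.7, 0.9]          # relative depth from a model
gt = [2.0 * p + 1.0 for p in pred]   # metric depth differing by scale/shift
s, t = align_scale_shift(pred, gt)   # recovers s ~ 2.0, t ~ 1.0
```

After alignment, standard metrics (absolute relative error, RMSE) are computed on `s * p + t`.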
**Dpt Dinov2 Small Kitti** (facebook, Apache-2.0 license) · 3D Vision, Transformers · 710 downloads · 7 likes
A DPT model with a DINOv2 backbone for depth estimation tasks.

**Unsup Simcse Ja Base** (cl-nagoya) · Text Embedding, Transformers, Japanese · 190 downloads · 2 likes
An unsupervised SimCSE-based Japanese sentence embedding model, designed to produce high-quality Japanese sentence embeddings.

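Unsupervised SimCSE trains with a contrastive (InfoNCE) objective: the same sentence is encoded twice with different dropout masks, the two resulting vectors form a positive pair, and the other sentences in the batch act as negatives. A pure-Python sketch of that loss on toy embeddings:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def simcse_loss(view_a, view_b, temperature=0.05):
    """InfoNCE over a batch: sentence i's two dropout views are the positive
    pair; every other sentence in the batch serves as a negative."""
    n = len(view_a)
    total = 0.0
    for i in range(n):
        sims = [cosine(view_a[i], view_b[j]) / temperature for j in range(n)]
        m = max(sims)
        log_denom = m + math.log(sum(math.exp(s - m) for s in sims))
        total += log_denom - sims[i]  # -log softmax of the positive pair
    return total / n

# Toy embeddings: matched views give a much lower loss than mismatched ones.
a = [[1.0, 0.0], [0.0, 1.0]]
matched = simcse_loss(a, a)
mismatched = simcse_loss(a, [a[1], a[0]])
```

Minimizing this loss pulls the two dropout views of each sentence together while pushing different sentences apart, which is what makes the embeddings useful for similarity search.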
**Bart Large Paper2slides Summarizer** (com3dian, MIT license) · Text Generation, Transformers, English · 26 downloads · 7 likes
A summarization model based on the BART-Large architecture, designed to condense research-paper content into a format suitable for slide presentations.

**Zero Shot Classify SSTuning ALBERT** (DAMO-NLP-SG, MIT license) · Text Classification, Transformers · 98 downloads · 4 likes
A zero-shot text classification model trained with Self-Supervised Tuning (SSTuning) on the ALBERT-xxlarge-v2 architecture; it can be applied directly to tasks such as sentiment analysis and topic classification without fine-tuning.

**Zero Shot Implicit Bi Encoder** (claritylab, MIT license) · Text Classification, Transformers, English · 31 downloads · 1 like
A zero-shot text classification model based on sentence-transformers that classifies text without labeled data through implicit training.

**Zero Shot** (Mel-Iza0) · Large Language Model, Transformers, Other · 71 downloads · 2 likes
A Portuguese zero-shot classification model based on the DeBERTa-v3 architecture, suitable for text classification tasks without fine-tuning.

**Congen Paraphrase Multilingual Mpnet Base V2** (kornwtp, Apache-2.0 license) · Text Embedding, Transformers · 329 downloads · 3 likes
A multilingual sentence embedding model based on the ConGen framework that maps sentences into a 768-dimensional vector space, suitable for tasks such as semantic search.

**Congen Simcse Model Roberta Base Thai** (kornwtp, Apache-2.0 license) · Text Embedding, Transformers · 86 downloads · 1 like
A Thai sentence-similarity model based on the RoBERTa architecture that maps sentences into a 768-dimensional vector space, suitable for tasks such as semantic search.

**Fasttext Classification** (paulhindemith) · Text Classification, Transformers, Japanese · 49 downloads · 0 likes
An experimental classification model based on fastText word vectors, supporting zero-shot classification tasks.

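fastText-style classification reduces to averaging the word (and, in the real library, character n-gram) vectors of a text and scoring each label with a linear layer. A toy sketch with hypothetical two-dimensional vectors; the vocabulary and label vectors here are invented for illustration, not taken from any trained model:

```python
# Hypothetical toy word vectors; real fastText also averages character
# n-gram vectors and learns the label vectors during training.
word_vecs = {
    "great": [1.0, 0.2],
    "movie": [0.3, 0.1],
    "terrible": [-1.0, 0.1],
}
label_vecs = {"positive": [1.0, 0.0], "negative": [-1.0, 0.0]}

def classify(tokens):
    """Average the token vectors, then score each label linearly."""
    avg = [sum(word_vecs[t][d] for t in tokens) / len(tokens) for d in range(2)]
    scores = {lbl: sum(a * b for a, b in zip(avg, vec))
              for lbl, vec in label_vecs.items()}
    return max(scores, key=scores.get)

label = classify(["great", "movie"])  # "positive"
```

The bag-of-vectors averaging is what makes fastText fast; it trades word order for speed.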
**Bart Large Citesum Title** (yuningm) · Text Generation, Transformers, English · 25 downloads · 1 like
A text summarization model fine-tuned on the CiteSum dataset from facebook/bart-large, designed to generate title-style summaries of scientific literature.

**Anomaly** (hafidber) · Image Classification, Transformers · 32 downloads · 2 likes
A PyTorch-based image classification model for detecting anomalies in images.

**Gpt 2 Spanish** (flax-community, Apache-2.0 license) · Large Language Model, Spanish · 2,075 downloads · 27 likes
A GPT-2 model trained from scratch on the Spanish portion of the OSCAR corpus, supporting Spanish text generation tasks.

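Text generation with a model like this one is usually driven by temperature sampling: the next-token logits are divided by a temperature, turned into a distribution, and one token is drawn. A self-contained sketch with made-up logits over a tiny vocabulary (no real model involved):

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=None):
    """Temperature sampling: scale logits by 1/T, softmax, draw one token."""
    rng = rng or random.Random(0)
    scaled = {tok: l / temperature for tok, l in logits.items()}
    m = max(scaled.values())
    weights = {tok: math.exp(v - m) for tok, v in scaled.items()}
    total = sum(weights.values())
    r = rng.random() * total
    acc = 0.0
    for tok, w in weights.items():
        acc += w
        if acc >= r:
            return tok
    return tok  # numerical safety net

# Invented next-token logits for illustration only.
logits = {"hola": 2.0, "adios": 0.5, "gato": -1.0}
tok = sample_token(logits, temperature=0.7)
```

Lower temperatures sharpen the distribution toward the highest-logit token; higher ones make output more varied.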
**TILDE** (ielab) · Large Language Model, Transformers · 134 downloads · 3 likes
TILDE is a model based on the BERT architecture, used mainly for text retrieval and language modeling tasks.

**Flaubert Base Uncased** (flaubert, MIT license) · Large Language Model, Transformers, French · 1,838 downloads · 3 likes
FlauBERT is a French BERT model trained on a large-scale French corpus, developed with the French National Centre for Scientific Research (CNRS).

**Wav2vec2 Base En Voxpopuli V2** (facebook) · Speech Recognition, Transformers, English · 35 downloads · 1 like
A Wav2Vec2 base model pre-trained on 24.1k hours of unlabeled English audio from the VoxPopuli corpus, suitable as a starting point for speech recognition tasks.

**Chemical Bert Uncased Tsdae** (recobo, Apache-2.0 license) · Text Embedding, Transformers · 16 downloads · 0 likes
A chemistry-domain BERT model trained with TSDAE (Transformer-based Sequential Denoising Auto-Encoder), focused on sentence-similarity tasks.

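TSDAE is a denoising objective: the input sentence is corrupted, typically by deleting a large fraction of its tokens, and an encoder-decoder is trained to reconstruct the original, which forces the encoder to produce informative sentence vectors. A sketch of the corruption step (the deletion ratio of about 0.6 follows the TSDAE paper's default):

```python
import random

def delete_noise(tokens, ratio=0.6, seed=0):
    """TSDAE-style input noise: independently delete ~`ratio` of the tokens.
    The encoder-decoder is then trained to reconstruct the original sentence
    from this corrupted input."""
    rng = random.Random(seed)
    kept = [t for t in tokens if rng.random() >= ratio]
    return kept if kept else tokens[:1]  # never return an empty input

sentence = "benzene is an aromatic hydrocarbon with formula C6H6".split()
noisy = delete_noise(sentence)
```

Because reconstruction must go through the single pooled sentence vector, that vector ends up encoding the sentence's meaning, no labels required.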
**Awesome Fb Model** (ClaudeYang) · Text Classification, Transformers · 538 downloads · 1 like
A zero-shot classification model, capable of classifying text without task-specific training.

**SBERT Base Nli V2** (Muennighoff) · Text Embedding, Transformers · 138 downloads · 0 likes
SBERT-Base-NLI-v2 is a transformer-based sentence embedding model designed for sentence-similarity computation and semantic search tasks.

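Sentence-embedding models in the SBERT family typically collapse the transformer's per-token outputs into one fixed-size sentence vector by mean pooling over the non-padding positions. A minimal sketch of that pooling step on toy values:

```python
def mean_pool(token_embeddings, attention_mask):
    """SBERT-style mean pooling: average the token vectors where the
    attention mask is 1, ignoring padding positions."""
    dim = len(token_embeddings[0])
    totals = [0.0] * dim
    count = 0
    for emb, mask in zip(token_embeddings, attention_mask):
        if mask:
            count += 1
            for d in range(dim):
                totals[d] += emb[d]
    return [t / count for t in totals]

# Toy 2-d token embeddings; the last position is padding (mask 0).
tokens = [[1.0, 2.0], [3.0, 4.0], [9.0, 9.0]]
sentence_vec = mean_pool(tokens, [1, 1, 0])  # [2.0, 3.0]
```

The resulting fixed-size vectors can then be compared with cosine similarity for semantic search.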