# Pre-trained Models

- **Seed Coder 8B Instruct V1** · wasmdashai · 57 downloads · 2 likes
  A general-purpose text generation model built on the Transformers library, supporting a range of natural language processing tasks.
  *Tags:* Large Language Model · Transformers

- **Sparklerl 7B Stage2 Aug** · sparkle-reasoning · 1,551 downloads · 1 like
  An automatically generated Transformers model card; specific details are still to be added.
  *Tags:* Large Language Model · Transformers

- **Simple Plant Detection** · novinn · Apache-2.0 · 43 downloads · 1 like
  An image classification model, released under the Apache-2.0 license, designed to recognize 30 different plant species.
  *Tags:* Image Classification · Other

- **Moirai Moe 1.0 R Base** · Salesforce · 374.41k downloads · 9 likes
  A pre-trained time series forecasting model focused on time series analysis and prediction tasks.
  *Tags:* Climate Model

- **Medical Summarization** · Krishhh2912 · Apache-2.0 · 20 downloads · 0 likes
  A medical summarization model trained on the Bilal-Mamji/Medical-summary dataset, used to generate concise summaries of medical texts.
  *Tags:* Text Generation · English

- **Test Patchtsmixer** · ibm-research · Apache-2.0 · 5,300 downloads · 0 likes
  PatchTSMixer is a time series forecasting foundation model from IBM's Granite project, featuring a patch-based mixer architecture suited to a variety of forecasting tasks.
  *Tags:* Climate Model
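The patch-based design named above first splits a long series into fixed-length windows before mixing across them. A minimal sketch of that patching step (the patch length and stride here are illustrative, not the model's actual configuration):

```python
import numpy as np

def make_patches(series, patch_len=16, stride=8):
    """Split a 1-D series into overlapping fixed-length patches."""
    n = (len(series) - patch_len) // stride + 1
    return np.stack(
        [series[i * stride : i * stride + patch_len] for i in range(n)]
    )

patches = make_patches(np.arange(64, dtype=float))
# (64 - 16) // 8 + 1 = 7 patches, each of length 16
print(patches.shape)  # (7, 16)
```

Each patch then becomes one token-like unit for the mixer layers, which keeps the sequence short even for long histories.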
- **Xflux Text Encoders** · XLabs-AI · Apache-2.0 · 481.59k downloads · 17 likes
  T5 (Text-to-Text Transfer Transformer) is a general-purpose text-to-text model developed by Google, capable of handling a wide range of natural language processing tasks.
  *Tags:* Large Language Model · Transformers · English

- **Xunzi Qwen2 7B Upos** · KoichiYasuoka · Apache-2.0 · 16 downloads · 1 like
  A Qwen2 model pre-trained on Classical Chinese texts for POS tagging, supporting Universal Dependencies annotation.
  *Tags:* Sequence Labeling · Transformers · Other

- **Dac 44khz** · descript · 192.61k downloads · 6 likes
  A feature extraction model based on 🤗 Transformers; its specific functions and uses require further documentation.
  *Tags:* Large Language Model · Transformers

- **Dart V2 Vectors** · p1atdev · Apache-2.0 · 71 downloads · 1 like
  An automatically generated 🤗 Transformers model card hosted on the Hugging Face Hub.
  *Tags:* Large Language Model · Transformers

- **Parrots Chinese Hubert Base** · shibing624 · Apache-2.0 · 35 downloads · 1 like
  A Chinese HuBERT base model, pre-trained for text-to-speech tasks and supporting Chinese speech processing.
  *Tags:* Speech Synthesis · Transformers · Chinese

- **Parrots Chinese Roberta Wwm Ext Large** · shibing624 · Apache-2.0 · 76 downloads · 2 likes
  A Chinese pre-trained model based on the RoBERTa architecture, supporting text-to-speech tasks.
  *Tags:* Large Language Model · Transformers · Chinese

- **Dinov2 Base Xray 224** · StanfordAIMI · Apache-2.0 · 32.11k downloads · 2 likes
  Part of the AIMI Foundation Model Suite, a collection of foundation models for the radiology domain developed by the Stanford AIMI team, focused on medical image analysis.
  *Tags:* Image Classification · Transformers

- **Ocrmnist** · vanshp123 · Apache-2.0 · 16 downloads · 0 likes
  An optical character recognition model based on Hugging Face Transformers, designed to recognize MNIST-style digit images.
  *Tags:* Text Recognition · Transformers · English

- **Wav2vec2 Base 960h** · Xenova · 117 downloads · 3 likes
  An ONNX conversion of Facebook's wav2vec2-base-960h model for Transformers.js, enabling in-browser speech recognition.
  *Tags:* Speech Recognition · Transformers

- **Reastap Large Finetuned Wtq** · Yale-LILY · 66 downloads · 2 likes
  ReasTAP is a pre-trained table reasoning model that injects table reasoning skills through synthetic reasoning examples; this checkpoint is fine-tuned on the WikiTableQuestions dataset.
  *Tags:* Question Answering System · Transformers · English

- **Swinv2 Chaoyang** · Snarci · Apache-2.0 · 14 downloads · 0 likes
  A visual image classification model trained on the ImageNet-1k dataset, capable of recognizing a variety of common objects and scenes.
  *Tags:* Image Classification · Transformers

- **Speecht5 Tts** · microsoft · MIT · 113.83k downloads · 760 likes
  A SpeechT5 text-to-speech model fine-tuned on the LibriTTS dataset, supporting high-quality speech synthesis.
  *Tags:* Speech Synthesis · Transformers

- **Ivenpeople** · jctivensa · Apache-2.0 · 20 downloads · 0 likes
  A general-purpose image classification model trained on the ImageNet-1k dataset, capable of recognizing a variety of common objects and scenes.
  *Tags:* Image Classification · Transformers

- **Codereviewer** · microsoft · Apache-2.0 · 1,169 downloads · 137 likes
  CodeReviewer is a model pre-trained on code changes and code review data, designed to support code review tasks.
  *Tags:* Large Language Model · Transformers · Other

- **Codet5 Large** · Salesforce · BSD-3-Clause · 3,796 downloads · 70 likes
  CodeT5 is an identifier-aware unified pre-trained encoder-decoder model focused on code understanding and generation tasks.
  *Tags:* Large Language Model · Transformers

- **Vinai Translate Vi2en** · vinai · 197 downloads · 8 likes
  A pre-trained neural machine translation system from VinAI for Vietnamese-English and English-Vietnamese translation, described as state-of-the-art for these language pairs.
  *Tags:* Machine Translation · Transformers · Multilingual

- **Vit Base Patch16 224** · optimum · Apache-2.0 · 40 downloads · 0 likes
  An image classification model based on the Transformer architecture, pre-trained on ImageNet-21k and fine-tuned on ImageNet-1k.
  *Tags:* Image Classification · Transformers
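For a ViT-Base/16 configuration like the one above, the 224×224 input is cut into 16×16 patches that become the encoder's token sequence. A quick check of the resulting sequence length (the extra token is the standard [CLS] classification token):

```python
def vit_seq_len(image_size=224, patch_size=16, cls_token=True):
    """Number of tokens a ViT encoder sees for a square input image."""
    patches_per_side = image_size // patch_size     # 224 // 16 = 14
    n_patches = patches_per_side ** 2               # 14 * 14 = 196
    return n_patches + (1 if cls_token else 0)

print(vit_seq_len())  # 196 patches + [CLS] = 197
```

This is why ViT checkpoints are tied to a specific resolution: changing the image size changes the sequence length and requires interpolating the position embeddings.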
- **Densenet121 Res224 Rsna** · torchxrayvision · Apache-2.0 · 16 downloads · 0 likes
  A convolutional neural network based on the DenseNet architecture, designed for X-ray image classification; its dense blocks create dense connections between layers.
  *Tags:* Image Classification · Transformers
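The dense connections mentioned above mean each layer consumes the concatenation of all earlier feature maps rather than just the previous layer's output. A toy sketch of that connectivity pattern on flat feature vectors (layer widths and weights are illustrative, not DenseNet-121's real configuration):

```python
import numpy as np

def dense_block(x, n_layers=4, growth_rate=12, rng=None):
    """Each layer reads ALL previous feature maps and appends
    growth_rate new channels (dense connectivity, 1-D toy version)."""
    if rng is None:
        rng = np.random.default_rng(0)
    features = [x]
    for _ in range(n_layers):
        inp = np.concatenate(features)            # all earlier outputs
        w = rng.standard_normal((growth_rate, inp.size))
        features.append(np.maximum(w @ inp, 0))   # ReLU, growth_rate new channels
    return np.concatenate(features)

out = dense_block(np.ones(16))
# channel count grows linearly: 16 + 4 * 12 = 64
print(out.size)  # 64
```

The linear channel growth (input width plus `n_layers * growth_rate`) is what keeps DenseNets parameter-efficient compared with doubling widths layer by layer.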
- **Efficientnet 61 Planet Detection** · chlab · Apache-2.0 · 14 downloads · 0 likes
  EfficientNetV2 is a highly efficient convolutional neural network architecture optimized for training speed and parameter efficiency; the 61-channel version is a variant of this architecture.
  *Tags:* Image Classification · Transformers

- **Randeng Pegasus 238M Chinese** · IDEA-CCNL · 104 downloads · 4 likes
  A Chinese version of the PEGASUS-base model, specialized in text summarization tasks.
  *Tags:* Text Generation · Transformers · Chinese

- **Indobert Base Uncased** · indolem · MIT · 26.35k downloads · 42 likes
  IndoBERT is a BERT model optimized for Indonesian, performing strongly across multiple Indonesian NLP tasks.
  *Tags:* Large Language Model · Other

- **Kobert Base V1** · skt · 92.83k downloads · 29 likes
  KoBERT is a BERT model optimized for Korean, developed by SKT Brain and trained on Korean corpora.
  *Tags:* Large Language Model · Transformers

- **Kobart Summarization** · gogamza · MIT · 119.18k downloads · 12 likes
  A Korean text summarization model based on the KoBART architecture, capable of generating concise summaries of Korean news articles.
  *Tags:* Text Generation · Transformers · Korean

- **Tf Camembert Base** · jplu · 1,942 downloads · 0 likes
  A French language model based on the RoBERTa architecture, compatible with the TensorFlow framework.
  *Tags:* Large Language Model · Transformers

- **Bert Base Spanish Wwm Uncased** · dccuchile · 231.26k downloads · 65 likes
  BETO is a BERT model trained on a large Spanish corpus, available in cased and uncased versions and suitable for a variety of Spanish NLP tasks.
  *Tags:* Large Language Model · Spanish

- **Sew Tiny 100k Ft Ls100h** · asapp · Apache-2.0 · 736 downloads · 1 like
  SEW (Squeezed and Efficient Wav2vec) is a pre-trained speech recognition model from ASAPP Research that outperforms wav2vec 2.0 in both accuracy and efficiency.
  *Tags:* Speech Recognition · Transformers · Multilingual

- **Roberta Small** · klue · 3,362 downloads · 12 likes
  A compact Korean pre-trained RoBERTa model developed by the KLUE benchmark team.
  *Tags:* Large Language Model · Transformers · Korean

- **Bert Large Arabertv2** · aubmindlab · 334 downloads · 11 likes
  AraBERT is a pre-trained language model based on Google's BERT architecture, designed for Arabic natural language understanding tasks.
  *Tags:* Large Language Model · Arabic

- **Trocr Small Stage1** · microsoft · 3,713 downloads · 12 likes
  TrOCR is a Transformer-based pre-trained optical character recognition model with an encoder-decoder architecture, suited to OCR on single-line text images.
  *Tags:* Image-to-Text · Transformers

- **Chinese Bigbird Mini 1024** · Lowin · Apache-2.0 · 14 downloads · 1 like
  A Chinese pre-trained model based on the BigBird architecture, optimized for Chinese text processing and supporting long text sequences.
  *Tags:* Large Language Model · Transformers · Chinese

- **Bert Base Dutch Cased** · GroNLP · 51.97k downloads · 30 likes
  BERTje is a Dutch pre-trained BERT model developed at the University of Groningen, optimized for the Dutch language.
  *Tags:* Large Language Model · Other