# Lightweight Architecture
- **Sam2 Hiera Small.fb R896** (timm) · Apache-2.0 · Image Segmentation · Transformers · 142 downloads · 0 likes
  A SAM2 model based on the HieraDet image encoder, focused on image feature extraction tasks.
- **Linknet Tu Resnet18** (smp-test-models) · MIT · Image Segmentation · 214 downloads · 0 likes
  LinkNet is a PyTorch-implemented architecture for semantic segmentation tasks.
- **Chronos T5 Tiny** (autogluon) · Apache-2.0 · Time Series Forecasting · Transformers · 318.45k downloads · 12 likes
  Chronos is a family of pretrained time series forecasting models based on language model architectures, trained by scaling and quantizing time series into token sequences.
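The Chronos entry above describes tokenizing a series by scaling and quantizing it. A minimal sketch of that idea in plain Python, assuming mean scaling followed by uniform binning; the bin count and value range here are illustrative choices, not the actual Chronos tokenizer configuration:

```python
def tokenize_series(values, n_bins=16, low=-5.0, high=5.0):
    """Chronos-style tokenization sketch: mean-scale the series,
    then map each value to a uniform bin id (a stand-in token)."""
    # Scale by the mean absolute value so magnitudes are comparable
    scale = sum(abs(v) for v in values) / len(values) or 1.0
    scaled = [v / scale for v in values]
    # Quantize: clip to [low, high) and assign a bin index
    width = (high - low) / n_bins
    tokens = []
    for v in scaled:
        clipped = min(max(v, low), high - 1e-9)
        tokens.append(int((clipped - low) // width))
    return tokens, scale

tokens, scale = tokenize_series([10.0, 12.0, 8.0, 11.0])
# tokens are integer bin ids in [0, n_bins); scale is kept to
# de-quantize forecasts back to the original value range
```

A language-model backbone then treats these bin ids as ordinary vocabulary tokens and forecasts by predicting the next ids.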
- **Hiera Huge 224 Hf** (facebook) · Image Classification · Transformers · English · 41 downloads · 1 like
  Hiera is an efficient hierarchical vision Transformer that runs quickly and performs strongly on image and video tasks.
- **Hiera Large 224 Hf** (facebook) · Image Classification · Transformers · English · 532 downloads · 1 like
  Hiera is a fast, powerful, and concise hierarchical vision Transformer that outperforms prior models on image and video tasks while running faster.
- **Hiera Base Plus 224 Hf** (facebook) · Image Classification · Transformers · English · 15 downloads · 0 likes
  Hiera is a fast, powerful, and concise hierarchical vision Transformer that surpasses state-of-the-art results on a wide range of image and video tasks while significantly improving runtime speed.
- **Hiera Base 224 Hf** (facebook) · Image Classification · Transformers · English · 163 downloads · 0 likes
  Hiera is a fast, powerful, and concise hierarchical vision Transformer that excels in image and video tasks.
- **Tiny Mistral** (openaccess-ai-collective) · Large Language Model · Transformers · 23.43k downloads · 14 likes
  A randomly initialized model based on the Mistral architecture, suitable for end-to-end testing.
- **Efficientnet 61 Planet Detection** (chlab) · Apache-2.0 · Image Classification · Transformers · 14 downloads · 0 likes
  EfficientNetV2 is a highly efficient convolutional neural network architecture optimized for training speed and parameter efficiency; the 61-channel version is a variant of this architecture.
- **Levit 256** (facebook) · Apache-2.0 · Image Classification · Transformers · 37 downloads · 0 likes
  LeViT-256 is an efficient Transformer-based vision model designed for fast inference, pretrained on the ImageNet-1k dataset.
- **Albert Large Arabic** (asafaya) · Large Language Model · Transformers · Arabic · 45 downloads · 1 like
  An Arabic pretrained version of the ALBERT-large model, trained on an Arabic corpus of approximately 4.4 billion words.
- **Rexnet1 3x** (frgfm) · Apache-2.0 · Image Classification · Transformers · 15 downloads · 0 likes
  ReXNet-1.3x is an image classification model based on the ReXNet architecture, pretrained on the Imagenette dataset. The architecture reduces channel redundancy by redesigning the Squeeze-and-Excitation layers in its residual blocks.
- **Rexnet1 5x** (frgfm) · Apache-2.0 · Image Classification · Transformers · 15 downloads · 0 likes
  ReXNet-1.5x is a lightweight image classification model built on the ReXNet architecture and pretrained on the Imagenette dataset; it likewise reduces channel redundancy by redesigning the Squeeze-and-Excitation layers within residual blocks.
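Both ReXNet entries above attribute the architecture's efficiency to reworked Squeeze-and-Excitation layers in its residual blocks. A minimal plain-Python sketch of the generic SE operation those descriptions refer to (the tiny fully connected weights `w1`/`w2` are illustrative placeholders, and ReXNet's actual redesign of this block differs in detail):

```python
import math

def squeeze_excite(feature_map, w1, w2):
    """Generic Squeeze-and-Excitation step on a C x H x W feature map
    given as nested lists: pool per channel, pass through two small
    fully connected layers, then gate each channel by the result."""
    # Squeeze: global average pool each channel to a single scalar
    squeezed = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
                for ch in feature_map]
    # Excite: bottleneck FC (ReLU) then expansion FC (sigmoid gates)
    hidden = [max(0.0, sum(w * s for w, s in zip(row, squeezed)))
              for row in w1]
    gates = [1.0 / (1.0 + math.exp(-sum(w * h for w, h in zip(row, hidden))))
             for row in w2]
    # Rescale: weight every value in a channel by that channel's gate
    return [[[v * g for v in row] for row in ch]
            for ch, g in zip(feature_map, gates)]
```

The gating lets the network emphasize informative channels cheaply; ReXNet's contribution, per the entries above, is rethinking where and how this block sits inside the residual blocks to cut channel redundancy.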
- **Cspdarknet53 Mish** (frgfm) · Apache-2.0 · Image Classification · Transformers · 14 downloads · 0 likes
  An image classification model using the CSP-Darknet-53 architecture with the Mish activation, pretrained on the Imagenette dataset.