
kat_tiny_patch16_224.vitft

Developed by adamdad
KAT is a vision model that replaces the channel mixer of a traditional Transformer with Grouped Rational Kolmogorov-Arnold Networks (GR-KAN); this checkpoint is trained on the ImageNet-1k dataset.
Downloads 293
Release Time: 9/10/2024

Model Overview

This is an image classification model based on the Kolmogorov-Arnold Transformer (KAT) architecture, which swaps the MLP channel mixer of a standard Vision Transformer for Grouped Rational Kolmogorov-Arnold Networks (GR-KAN). It is trained at 224x224 input resolution.
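The core idea can be illustrated with a small, self-contained module. The following is a conceptual sketch only, not the authors' implementation: a learnable rational function whose coefficients are shared within channel groups, used as the nonlinearity of a two-layer channel mixer. The polynomial degrees, group count, and initialization here are assumptions for illustration.

```python
# Conceptual sketch of a GR-KAN-style channel mixer (not the authors' code).
import torch
import torch.nn as nn

class GroupedRationalActivation(nn.Module):
    """Learnable rational function P(x) / Q(x), one coefficient set per channel group."""
    def __init__(self, dim, groups=8, p_degree=5, q_degree=4):
        super().__init__()
        assert dim % groups == 0
        self.groups = groups
        self.p = nn.Parameter(torch.randn(groups, p_degree + 1) * 0.1)  # numerator coeffs
        self.q = nn.Parameter(torch.randn(groups, q_degree) * 0.1)      # denominator coeffs

    def forward(self, x):                       # x: (..., dim)
        *lead, dim = x.shape
        xg = x.view(*lead, self.groups, dim // self.groups)
        # Horner evaluation of the numerator polynomial, per group.
        num = torch.zeros_like(xg)
        for k in range(self.p.shape[1]):
            num = num * xg + self.p[:, k].view(self.groups, 1)
        # Denominator kept strictly positive: 1 + |b_1 x + ... + b_n x^n|.
        den = torch.zeros_like(xg)
        for k in range(self.q.shape[1]):
            den = den * xg + self.q[:, k].view(self.groups, 1)
        den = 1.0 + (den * xg).abs()
        return (num / den).view(*lead, dim)

class GRKANMixer(nn.Module):
    """Drop-in replacement for the Transformer MLP: linear -> grouped rational -> linear."""
    def __init__(self, dim, hidden_mult=4, groups=8):
        super().__init__()
        hidden = dim * hidden_mult
        self.fc1 = nn.Linear(dim, hidden)
        self.act = GroupedRationalActivation(hidden, groups)
        self.fc2 = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.fc2(self.act(self.fc1(x)))
```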

Model Features

GR-KAN Architecture
Uses Grouped Rational Kolmogorov-Arnold Networks to replace the channel mixer in traditional Transformers, potentially offering better feature extraction capabilities.
Efficient Image Processing
Processes 224x224 inputs split into 16x16 patches (patch16), the standard ViT-style configuration for ImageNet-scale classification.
Pre-trained Model
Pre-trained on the ImageNet-1k dataset and can be directly used for transfer learning.
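A minimal transfer-learning sketch is shown below. It assumes the authors' KAT repository is installed (so the architecture is registered with timm) and that the weights are hosted on the Hugging Face Hub under this identifier; adjust the name and head size for your setup.

```python
# Sketch: load the pre-trained checkpoint and swap the head for a new task.
import timm
import torch

model = timm.create_model(
    "hf_hub:adamdad/kat_tiny_patch16_224.vitft",
    pretrained=True,
    num_classes=10,          # replace the 1000-way ImageNet head for your task
)

# Optionally freeze the backbone and train only the new classification head.
for name, param in model.named_parameters():
    if "head" not in name:
        param.requires_grad = False

x = torch.randn(1, 3, 224, 224)   # dummy batch at the 224x224 training resolution
logits = model(x)
print(logits.shape)               # torch.Size([1, 10])
```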

Model Capabilities

Image Classification
Feature Extraction
Transfer Learning

Use Cases

Computer Vision
General Image Classification
Classify common objects and scenes.
Trained on the ImageNet-1k dataset, the model distinguishes its 1,000 object categories.
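A minimal inference sketch, assuming the same timm-compatible Hub checkpoint as above; the image path is a placeholder.

```python
# Sketch: classify a single image with the pre-trained 1000-class head.
import timm
import torch
from PIL import Image

model = timm.create_model("hf_hub:adamdad/kat_tiny_patch16_224.vitft", pretrained=True)
model.eval()

# Build the 224x224 evaluation transform from the model's own data config.
config = timm.data.resolve_model_data_config(model)
transform = timm.data.create_transform(**config, is_training=False)

image = Image.open("example.jpg").convert("RGB")
with torch.no_grad():
    logits = model(transform(image).unsqueeze(0))
probs = logits.softmax(dim=-1)
top5 = probs.topk(5)
print(top5.indices, top5.values)   # top-5 ImageNet-1k class indices and scores
```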
Visual Feature Extraction
Extract image features for downstream tasks.
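For feature extraction, timm models expose forward_features() for backbone outputs and accept num_classes=0 to return a pooled embedding; the Hub identifier is assumed as above.

```python
# Sketch: use the backbone as a feature extractor for downstream tasks.
import timm
import torch

backbone = timm.create_model(
    "hf_hub:adamdad/kat_tiny_patch16_224.vitft",
    pretrained=True,
    num_classes=0,            # drop the classifier; forward() returns pooled features
)
backbone.eval()

x = torch.randn(2, 3, 224, 224)
with torch.no_grad():
    tokens = backbone.forward_features(x)   # per-token features: (batch, tokens, embed_dim)
    pooled = backbone(x)                    # one pooled embedding per image
print(tokens.shape, pooled.shape)
```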