
ViT Base Patch16 224 In21k Mobile Eye Tracking Dataset V0

Developed by: julienmercier
A fine-tuned eye-tracking image classification model based on Google's Vision Transformer (ViT) architecture
Downloads: 24
Released: 3/8/2023

Model Overview

This model is fine-tuned from Google's ViT-base-patch16-224-in21k pre-trained model for image classification on mobile eye-tracking data. It reaches 93.49% accuracy and a validation loss of 0.2002 on the evaluation set.
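A minimal inference sketch using the Hugging Face transformers image-classification pipeline. The repository id below is an assumption derived from the model name and author shown on this page; verify it on the Hub before use.

```python
from typing import Dict, List

# Assumed Hub repository id (author/model-name slug); confirm before use.
MODEL_ID = "julienmercier/vit-base-patch16-224-in21k-mobile-eye-tracking-dataset-v0"

def classify_frame(image_path: str, model_id: str = MODEL_ID) -> List[Dict]:
    """Classify one eye-tracking image frame.

    Returns a list of {"label": ..., "score": ...} dicts ranked by score.
    """
    # transformers is imported lazily so this module loads even where it is
    # not installed; the first call downloads the model weights from the Hub.
    from transformers import pipeline
    classifier = pipeline("image-classification", model=model_id)
    return classifier(image_path)
```

For batch processing, build the pipeline once outside the loop and reuse it rather than reconstructing it per frame.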

Model Features

High Accuracy
Achieves 93.49% classification accuracy on the evaluation set
ViT-based Architecture
Uses the Vision Transformer architecture, well suited to image data
Transfer Learning
Fine-tuned from a pre-trained model, which makes it effective on small datasets
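The "patch16-224" in the base model's name encodes the ViT input geometry: 224×224 images are split into non-overlapping 16×16 patches, each embedded as one token. A quick sanity check of the resulting token count, assuming the standard ViT-Base configuration:

```python
# Geometry implied by "vit-base-patch16-224":
IMAGE_SIZE = 224  # input resolution (pixels per side)
PATCH_SIZE = 16   # patch resolution (pixels per side)

patches_per_side = IMAGE_SIZE // PATCH_SIZE   # 224 / 16 = 14
num_patches = patches_per_side ** 2           # 14 * 14 = 196 patch tokens
sequence_length = num_patches + 1             # + 1 [CLS] token = 197

print(patches_per_side, num_patches, sequence_length)  # 14 196 197
```

The extra [CLS] token is the one whose final hidden state feeds the classification head during fine-tuning.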

Model Capabilities

Image Classification
Eye Tracking Data Analysis

Use Cases

Human-Computer Interaction Research
Eye Tracking Experiment Analysis
Used to analyze visual data collected in eye-tracking experiments
93.49% classification accuracy
Psychological Research
Visual Attention Study
Analyzes how subjects distribute their attention during visual tasks