# On-device Optimization
### Distil Small.en (distil-whisper)
- License: MIT · Task: Speech Recognition · Tags: Transformers, English
- Downloads: 33.51k · Likes: 97

Distil-Whisper is a distilled version of the Whisper model: 6x faster and 49% smaller, while performing within 1% WER of Whisper on out-of-distribution evaluation sets.
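A minimal sketch of loading this checkpoint for transcription with the Transformers pipeline API. The checkpoint ID `distil-whisper/distil-small.en` is inferred from the card name above, so verify it on the Hub before use:

```python
def build_asr(model_id: str = "distil-whisper/distil-small.en",
              chunk_length_s: int = 30):
    """Build a speech-recognition pipeline for the distilled Whisper model."""
    # Imported lazily so this module can be inspected without
    # transformers installed.
    from transformers import pipeline

    # chunk_length_s splits long audio into windows, enabling
    # long-form transcription.
    return pipeline("automatic-speech-recognition",
                    model=model_id,
                    chunk_length_s=chunk_length_s)


if __name__ == "__main__":
    asr = build_asr()                 # downloads the checkpoint on first use
    print(asr("sample.wav")["text"])  # path to a local audio file
```

The English-only `.en` variant trades multilingual coverage for a smaller decoder, which suits on-device use.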
### Wav2vec2 Base Superb Ks (superb)
- License: Apache-2.0 · Task: Speech Recognition · Tags: Transformers, English
- Downloads: 5,820 · Likes: 15

SUPERB keyword-spotting model based on wav2vec2-base, designed for 16 kHz speech.
### Hubert Large Superb Ks (superb)
- License: Apache-2.0 · Task: Speech Recognition · Tags: Transformers, English
- Downloads: 78 · Likes: 0

Keyword-detection model based on the Hubert-Large architecture, performing strongly on the SUPERB benchmark.
### Hubert Base Superb Ks (superb)
- License: Apache-2.0 · Task: Audio Classification · Tags: Transformers, English
- Downloads: 11.29k · Likes: 8

Keyword-spotting model based on the Hubert architecture, classifying speech segments into a predefined keyword set.
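The three SUPERB keyword-spotting checkpoints above are all driven the same way through the audio-classification pipeline. A sketch assuming the `superb/hubert-base-superb-ks` checkpoint ID (inferred from the card name, so verify it on the Hub):

```python
def build_keyword_spotter(model_id: str = "superb/hubert-base-superb-ks"):
    """Build an audio-classification pipeline that maps 16 kHz speech
    segments onto the model's fixed keyword set."""
    # Lazy import: transformers is a heavy dependency.
    from transformers import pipeline

    return pipeline("audio-classification", model=model_id)


if __name__ == "__main__":
    spotter = build_keyword_spotter()
    # top_k limits the output to the most likely keyword labels.
    for pred in spotter("command.wav", top_k=3):
        print(pred["label"], round(pred["score"], 3))
```

Swapping in `superb/wav2vec2-base-superb-ks` or `superb/hubert-large-superb-ks` as `model_id` should select the other two cards' checkpoints, trading accuracy against model size.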