DistilHuBERT
DistilHuBERT is a lightweight speech representation model obtained by layer-wise distillation of HuBERT, substantially reducing model size and computational cost while largely preserving performance.
Downloads: 2,962
Release Time: 3/2/2022
Model Overview
A lightweight speech representation model obtained by distilling HuBERT's hidden representations layer by layer within a multi-task learning framework; suitable for a wide range of speech processing tasks.
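A minimal usage sketch in Python, assuming the published checkpoint is available on the Hugging Face Hub as "ntu-spml/distilhubert" and that the transformers and torch packages are installed; the zero-filled array stands in for real 16 kHz audio.

import numpy as np
import torch
from transformers import AutoFeatureExtractor, AutoModel

feature_extractor = AutoFeatureExtractor.from_pretrained("ntu-spml/distilhubert")
model = AutoModel.from_pretrained("ntu-spml/distilhubert")

# One second of dummy 16 kHz audio; replace with a real waveform.
waveform = np.zeros(16000, dtype=np.float32)

inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Frame-level speech representations: (batch, frames, hidden_size).
print(outputs.last_hidden_state.shape)

The extracted hidden states can replace full HuBERT features in downstream heads such as speaker identification or keyword spotting.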
Model Features
Efficient Distillation
Layer-wise distillation reduces HuBERT's model size by 75% and speeds it up by 73%.
Multi-task Learning
Uses a multi-task learning framework in which the student directly predicts hidden representations from several HuBERT layers (sketched after this list).
Low Resource Requirements
Needs little training time and data, making it practical for personal and edge devices.
Performance Retention
Retains most of HuBERT's performance across ten different speech processing tasks.
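To make the multi-task distillation objective concrete, below is a schematic PyTorch sketch of a layer-wise distillation loss. Combining an L1 term with a cosine-similarity term follows the DistilHuBERT paper's description; the example teacher layers, the log-sigmoid form, and the "lam" weight are illustrative assumptions rather than the exact published recipe.

import torch
import torch.nn.functional as F

def layerwise_distillation_loss(head_outputs, teacher_hiddens, lam=1.0):
    # head_outputs:    list of (B, T, D) tensors, one per student prediction head
    # teacher_hiddens: matching list of teacher hidden states (e.g. layers 4, 8, 12)
    total = 0.0
    for pred, target in zip(head_outputs, teacher_hiddens):
        l1 = F.l1_loss(pred, target)                     # match magnitudes
        cos = F.cosine_similarity(pred, target, dim=-1)  # match directions, (B, T)
        total = total + l1 - lam * torch.log(torch.sigmoid(cos)).mean()
    return total / len(head_outputs)

Each prediction head pulls the student toward a different teacher layer, which is what lets a two-layer student summarize knowledge spread across the twelve-layer teacher.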
Model Capabilities
Speech Representation Extraction
Speech Recognition (requires fine-tuning)
Feature Backbone for Downstream Speech Processing Tasks
Use Cases
Speech Processing
Speech Recognition System
Can be fine-tuned to build a speech recognition system (see the CTC sketch after this section).
Maintains performance close to the original HuBERT.
Edge Device Speech Processing
Suitable for deployment on resource-limited edge devices for speech processing.
Small model size with high computational efficiency.
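For the speech recognition use case, a minimal CTC fine-tuning sketch is shown below, assuming the same Hub checkpoint; the vocabulary size and the dummy batch are placeholders for your own tokenizer and data pipeline, and the newly added CTC head is trained from scratch.

import torch
from transformers import HubertForCTC

# The distilled encoder is loaded; the CTC head (lm_head) is freshly initialized.
model = HubertForCTC.from_pretrained(
    "ntu-spml/distilhubert",
    vocab_size=32,             # assumption: size of your character vocabulary
    ctc_loss_reduction="mean",
)
model.freeze_feature_encoder()  # common practice: keep the CNN front end fixed

# Dummy batch: one second of 16 kHz audio and a short label sequence.
input_values = torch.randn(1, 16000)
labels = torch.tensor([[5, 12, 7, 9]])

loss = model(input_values=input_values, labels=labels).loss
loss.backward()  # hook this into an optimizer and a real training loop

Because the encoder is small, such fine-tuning runs stay cheap, which is in line with the low-resource positioning above.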