
Wav2vec2 Large 10min Lv60 Self

Developed by Splend1dchan
This is a large speech recognition model based on the Wav2Vec2 architecture, pre-trained on unlabeled audio and fine-tuned on 10 minutes of labeled data from Libri-Light and LibriSpeech using a self-training objective. It is intended for speech audio sampled at 16kHz.
Downloads 177
Release Time: 4/12/2022

Model Overview

Wav2Vec 2.0 is an automatic speech recognition (ASR) model that learns powerful representations from raw speech audio and is then fine-tuned on transcribed speech, enabling accurate recognition with very little labeled data.
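The model can be used for inference with the Hugging Face transformers library. The sketch below is a minimal example; the hub ID, the local audio file name, and the use of librosa for loading and resampling are assumptions for illustration, not details taken from this card.

```python
# Minimal transcription sketch with transformers; the model ID and audio file
# below are assumptions for illustration, not details taken from this card.
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "Splend1dchan/wav2vec2-large-10min-lv60-self"  # hypothetical hub ID
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load a local recording and resample to the expected 16kHz rate.
speech, _ = librosa.load("example.wav", sr=16_000)  # hypothetical file

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: take the most likely token at each frame.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```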

Model Features

Self-training Objective
The model uses a self-training objective, which improves performance when labeled data is scarce.
Low-resource Speech Recognition
With only 10 minutes of labeled data and 53k hours of unlabeled data for pre-training, the model still achieves strong recognition results.
Latent Space Masking
Masks the speech input in the latent space and solves a contrastive task defined over a quantization of the jointly learned latent representations (a minimal sketch of this objective follows this list).
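To make the latent-space contrastive task concrete, the following is a hypothetical sketch of such an objective: a context vector is scored against the true quantized latent and a set of distractors via cosine similarity. The function name, tensor shapes, and temperature value are assumptions for illustration, not the exact training code for this model.

```python
# Hypothetical sketch of a contrastive objective over quantized latents.
import torch
import torch.nn.functional as F

def contrastive_loss(context, target, distractors, temperature=0.1):
    """context, target: (dim,) tensors; distractors: (K, dim) negative samples."""
    # Candidate set: the true quantized latent followed by K distractors.
    candidates = torch.cat([target.unsqueeze(0), distractors], dim=0)  # (K+1, dim)
    # Cosine similarity between the context vector and every candidate.
    sims = F.cosine_similarity(context.unsqueeze(0), candidates, dim=-1) / temperature
    # The true target sits at index 0; push its similarity above the distractors.
    return F.cross_entropy(sims.unsqueeze(0), torch.zeros(1, dtype=torch.long))
```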

Model Capabilities

Speech recognition
Audio processing
Automatic speech-to-text

Use Cases

Speech transcription
Meeting minutes
Automatically transcribe meeting recordings into written transcripts
Voice notes
Convert voice memos into searchable text
Assistive technology
Hearing assistance
Provide real-time speech-to-text services for people with hearing impairments