# Wav2Vec2-Base-960h
This is a base model pretrained and fine-tuned on 960 hours of LibriSpeech 16kHz sampled speech audio, offering high-performance automatic speech recognition.
## Quick Start
The base model is pretrained and fine-tuned on 960 hours of LibriSpeech 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
The original model can be found in Facebook's Wav2Vec2 repository.
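If your audio is not already at 16kHz, resample it before inference. A minimal sketch (not part of the original card) using torchaudio; the file path `example.wav` is a placeholder:

```python
import torchaudio

# load any audio file; torchaudio returns (waveform, native sample rate)
waveform, sample_rate = torchaudio.load("example.wav")  # placeholder path

# resample to the 16kHz rate expected by wav2vec2-base-960h
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, orig_freq=sample_rate, new_freq=16_000)
```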
## Paper
[wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477)
Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli
### Abstract
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
## Features
- High-quality pretraining: trained on 960 hours of LibriSpeech data for accurate speech recognition.
- Sampling requirement: the model expects 16kHz sampled speech audio; resample other rates before inference (see the sketch in Quick Start).
## Installation
The original document lists no specific installation steps; the usage examples below rely on the `transformers`, `datasets`, `soundfile`, and `torch` Python packages (plus `jiwer` for evaluation), e.g. `pip install transformers datasets soundfile torch jiwer`.
## Usage Examples
### Basic Usage
To transcribe audio files, the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch

# load the processor (feature extractor + tokenizer) and the fine-tuned model
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

# load a dummy LibriSpeech split for demonstration
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")

# convert the raw 16kHz waveform into model inputs
input_values = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt", padding="longest").input_values

# retrieve logits, take the argmax over the vocabulary, and decode into text
logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
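The snippet above decodes a sample from a dummy dataset. Below is a minimal sketch (not from the original card) for transcribing a local recording, assuming a 16kHz mono file at the placeholder path `audio.wav`, read with soundfile:

```python
import soundfile as sf
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

# read a 16kHz mono file from disk; "audio.wav" is a placeholder path
speech, sample_rate = sf.read("audio.wav")

input_values = processor(speech, sampling_rate=sample_rate, return_tensors="pt").input_values
with torch.no_grad():
    logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```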
### Advanced Usage
This code snippet shows how to evaluate `facebook/wav2vec2-base-960h` on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import torch
from jiwer import wer

# load the full LibriSpeech test-clean split (swap "clean" for "other" to evaluate test-other)
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")

model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")

def map_to_pred(batch):
    # with batched=True and batch_size=1, batch["audio"] is a one-element list
    input_values = processor(batch["audio"][0]["array"], sampling_rate=16_000, return_tensors="pt", padding="longest").input_values
    with torch.no_grad():
        logits = model(input_values.to("cuda")).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    transcription = processor.batch_decode(predicted_ids)
    batch["transcription"] = transcription
    return batch

result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"])

print("WER:", wer(result["text"], result["transcription"]))
```
Result (WER):

| "clean" | "other" |
|---------|---------|
| 3.4 | 8.6 |
## Documentation
### Model Information

| Property | Details |
|----------|---------|
| Model Type | Wav2Vec2-Base-960h |
| Training Data | 960 hours of LibriSpeech, 16kHz sampled speech audio |
| Datasets | librispeech_asr |
| Tags | audio, automatic-speech-recognition, hf-asr-leaderboard |
| License | apache-2.0 |
### Widget Examples
- LibriSpeech sample 1 (audio)
- LibriSpeech sample 2 (audio)
### Model Index
- Name: wav2vec2-base-960h
- Results:
  - Task: Automatic Speech Recognition
    - Dataset: LibriSpeech (clean)
    - Dataset: LibriSpeech (other)
## License
This project is licensed under the Apache 2.0 license.