🚀 Wav2Vec2-Large-LV60 finetuned on multi-lingual Common Voice
This project takes the pre-trained checkpoint wav2vec2-large-lv60 and fine-tunes it on CommonVoice to recognize phonetic labels in multiple languages. It provides a single model for multilingual phoneme recognition, enabling more accurate speech-related tasks across different languages.
✨ Features
- Multilingual Support: recognizes phonetic labels in multiple languages.
- Pretrained Backbone: builds on the powerful wav2vec2-large-lv60 checkpoint.
- Zero-shot Cross-lingual Transfer: as described in the paper referenced below, the fine-tuned model can transcribe unseen languages.
📦 Installation
The model is used through the Hugging Face `transformers` library. Installing `transformers`, `datasets`, and `torch` (for example via `pip install transformers datasets torch`) is enough to run the usage example below; `torchaudio` is only needed for the optional resampling sketch further down.
💻 Usage Examples
Basic Usage
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch

# load the processor and the fine-tuned model
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-lv-60-espeak-cv-ft")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-lv-60-espeak-cv-ft")

# load a dummy dataset and read the first sound file
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")

# tokenize (the audio must be sampled at 16 kHz)
input_values = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt").input_values

# retrieve logits
with torch.no_grad():
    logits = model(input_values).logits

# take argmax and decode to phoneme labels
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
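Note that `transcription` is a list containing a string of phonetic labels rather than words; see the notes below for how the phoneme output relates to word-level output.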
📚 Documentation
Important Notes
When using the model, make sure that your speech input is sampled at 16 kHz. Also note that the model outputs a string of phonetic labels, not words: to obtain words, the phonetic output labels have to be mapped to words through a dictionary (pronunciation lexicon).
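Below is a minimal sketch of both points, reusing the `processor` and `model` objects from the usage example above: it resamples an input file to 16 kHz with torchaudio before inference, then maps the phoneme output to words through a toy lexicon. The file name `my_audio.wav` and the `toy_lexicon` contents are made-up placeholders; a real setup needs a pronunciation dictionary (and usually a lexicon-aware decoder) for the target language.

```python
import torch
import torchaudio

# Resample arbitrary-rate audio to the 16 kHz the model expects
# ("my_audio.wav" is a hypothetical input file).
waveform, orig_sr = torchaudio.load("my_audio.wav")
waveform = torchaudio.functional.resample(waveform, orig_freq=orig_sr, new_freq=16_000)

# Reuse `processor` and `model` from the usage example above.
input_values = processor(
    waveform.squeeze(0).numpy(), sampling_rate=16_000, return_tensors="pt"
).input_values
with torch.no_grad():
    logits = model(input_values).logits
phonemes = processor.batch_decode(torch.argmax(logits, dim=-1))[0]

# Toy phoneme-to-word lookup; a real system needs a pronunciation
# lexicon for the target language rather than this illustrative dict.
toy_lexicon = {"h ə l oʊ": "hello", "w ɜː l d": "world"}
print(toy_lexicon.get(phonemes, phonemes))  # fall back to raw phonemes
```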
Paper Reference
Paper: Simple and Effective Zero-shot Cross-lingual Phoneme Recognition
Authors: Qiantong Xu, Alexei Baevski, Michael Auli
Abstract
Recent progress in self-training, self-supervised pretraining and unsupervised learning enabled well-performing speech recognition systems without any labeled data. However, in many cases there is labeled data available for related languages which is not utilized by these methods. This paper extends previous work on zero-shot cross-lingual transfer learning by fine-tuning a multilingually pretrained wav2vec 2.0 model to transcribe unseen languages. This is done by mapping phonemes of the training languages to the target language using articulatory features. Experiments show that this simple method significantly outperforms prior work which introduced task-specific architectures and used only part of a monolingually pretrained model.
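As a rough illustration of the mapping idea (not the paper's actual implementation), each phoneme can be described by a vector of articulatory features, and an unseen target-language phoneme can be mapped to the closest training-language phoneme. The phoneme inventory and feature values in this sketch are simplified assumptions for illustration only.

```python
# Toy articulatory-feature mapping: each training phoneme is described by
# binary features, and an unseen phoneme is mapped to the nearest seen one.
# Features (voiced, nasal, bilabial, plosive) are simplified assumptions.
TRAIN_PHONEMES = {
    "p": (0, 0, 1, 1),
    "b": (1, 0, 1, 1),
    "m": (1, 1, 1, 0),
}

def map_phoneme(target_features: tuple) -> str:
    """Return the training phoneme with the smallest Hamming distance."""
    return min(
        TRAIN_PHONEMES,
        key=lambda p: sum(a != b for a, b in zip(TRAIN_PHONEMES[p], target_features)),
    )

# An unseen voiced bilabial nasal maps to "m".
print(map_phoneme((1, 1, 1, 0)))  # -> m
```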
Original Model Source
The original model can be found at https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
📄 License
This project is licensed under the Apache-2.0 license.
| Property | Details |
|----------|---------|
| Language | Multilingual |
| Datasets | common_voice |
| Tags | speech, audio, automatic-speech-recognition, phoneme-recognition |
| Widget Example 1 | [Librispeech sample 1](https://cdn-media.huggingface.co/speech_samples/sample1.flac) |
| Widget Example 2 | [Librispeech sample 2](https://cdn-media.huggingface.co/speech_samples/sample2.flac) |