🚀 Wav2Vec2-Large-XLSR-53-rw
This model is fine-tuned from facebook/wav2vec2-large-xlsr-53 on Kinyarwanda, providing automatic speech recognition (ASR) for the language.
✨ Features
- Fine-tuned on Kinyarwanda using the Common Voice dataset, with about 25% of the training data (limited to utterances without downvotes and shorter than 9.5 seconds).
- Validated on 2048 utterances from the validation set.
- Attempts to predict the apostrophes that mark contractions of pronouns with vowel-initial words, but may overgeneralize.
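The apostrophe handling mentioned above can be illustrated with the contraction-restoring regex used in the evaluation script's text normalization. A minimal sketch; the sample word is an invented Kinyarwanda-like example:

```python
import re

# Reattach an apostrophe marking a contraction of a pronoun with a
# following vowel-initial word (same pattern as the evaluation
# script's normalization step).
def restore_contractions(text: str) -> str:
    return re.sub(r"([b-df-hj-np-tv-z])' ([aeiou])", r"\1'\2", text)

print(restore_contractions("n' umwana"))  # → n'umwana
```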
📦 Installation
The original model card lists no dedicated installation steps. The examples below assume `torch`, `torchaudio`, `datasets`, `transformers`, `jiwer`, and `unidecode` are installed (e.g. via `pip`).
💻 Usage Examples
Basic Usage
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "rw", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("lucio/wav2vec2-large-xlsr-kinyarwanda")
model = Wav2Vec2ForCTC.from_pretrained("lucio/wav2vec2-large-xlsr-kinyarwanda")

# Common Voice audio is 48 kHz; the model expects 16 kHz input.
resampler = torchaudio.transforms.Resample(48_000, 16_000)

def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
Advanced Usage
The following script evaluates the model on the full Common Voice test set and reports the word error rate (WER).
```python
import re

import jiwer
import torch
import torchaudio
import unidecode
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "rw", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("lucio/wav2vec2-large-xlsr-kinyarwanda-apostrophied")
model = Wav2Vec2ForCTC.from_pretrained("lucio/wav2vec2-large-xlsr-kinyarwanda-apostrophied")
model.to("cuda")

chars_to_ignore_regex = r'[!"#$%&()*+,./:;<=>?@\[\]\\_{}|~£¤¨©ª«¬®¯°·¸»¼½¾ðʺ˜˝ˮ‐–—―‚“”„‟•…″‽₋€™−√�]'

def remove_special_characters(batch):
    # Normalize the various apostrophe-like characters to a plain apostrophe.
    batch["text"] = re.sub(r'[ʻʽʼ‘’´`]', r"'", batch["sentence"])
    batch["text"] = re.sub(chars_to_ignore_regex, "", batch["text"]).lower().strip()
    # Reattach apostrophes marking contractions with vowel-initial words.
    batch["text"] = re.sub(r"([b-df-hj-np-tv-z])' ([aeiou])", r"\1'\2", batch["text"])
    batch["text"] = re.sub(r"(-| '|' | +)", " ", batch["text"])
    batch["text"] = unidecode.unidecode(batch["text"])
    return batch

# Common Voice audio is 48 kHz; the model expects 16 kHz input.
resampler = torchaudio.transforms.Resample(48_000, 16_000)

def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    batch["sampling_rate"] = 16_000
    return batch

def cv_prepare(batch):
    batch = remove_special_characters(batch)
    batch = speech_file_to_array_fn(batch)
    return batch

test_dataset = test_dataset.map(cv_prepare)

def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

def chunked_wer(targets, predictions, chunk_size=None):
    # Computing WER over the whole test set at once can exhaust memory,
    # so accumulate hit/substitution/deletion/insertion counts per chunk.
    if chunk_size is None:
        return jiwer.wer(targets, predictions)
    start = 0
    end = chunk_size
    H, S, D, I = 0, 0, 0, 0
    while start < len(targets):
        chunk_metrics = jiwer.compute_measures(targets[start:end], predictions[start:end])
        H += chunk_metrics["hits"]
        S += chunk_metrics["substitutions"]
        D += chunk_metrics["deletions"]
        I += chunk_metrics["insertions"]
        start += chunk_size
        end += chunk_size
    return float(S + D + I) / float(H + S + D)

print("WER: {:.2f}".format(100 * chunked_wer(result["sentence"], result["pred_strings"], chunk_size=4000)))
```
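The final ratio in `chunked_wer` is the standard WER formula, WER = (S + D + I) / (H + S + D), where H, S, D, and I are word-level hits, substitutions, deletions, and insertions. A self-contained sketch of how those counts arise from a Levenshtein word alignment, independent of `jiwer` and with invented example sentences (tie-breaking between equal-cost alignments is arbitrary here and may differ from `jiwer`):

```python
def word_alignment_counts(reference, hypothesis):
    """Count (hits, substitutions, deletions, insertions) between two
    word sequences via standard Levenshtein alignment."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = (edit cost, hits, subs, dels, ins) for ref[:i] vs hyp[:j]
    dp = [[None] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    dp[0][0] = (0, 0, 0, 0, 0)
    for i in range(1, len(ref) + 1):
        c = dp[i - 1][0]
        dp[i][0] = (c[0] + 1, c[1], c[2], c[3] + 1, c[4])  # all deletions
    for j in range(1, len(hyp) + 1):
        c = dp[0][j - 1]
        dp[0][j] = (c[0] + 1, c[1], c[2], c[3], c[4] + 1)  # all insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            c = dp[i - 1][j - 1]
            if ref[i - 1] == hyp[j - 1]:
                cand = [(c[0], c[1] + 1, c[2], c[3], c[4])]      # hit
            else:
                cand = [(c[0] + 1, c[1], c[2] + 1, c[3], c[4])]  # substitution
            c = dp[i - 1][j]
            cand.append((c[0] + 1, c[1], c[2], c[3] + 1, c[4]))  # deletion
            c = dp[i][j - 1]
            cand.append((c[0] + 1, c[1], c[2], c[3], c[4] + 1))  # insertion
            dp[i][j] = min(cand)
    _, H, S, D, I = dp[len(ref)][len(hyp)]
    return H, S, D, I

# Invented sentence pair: one substitution and one inserted word.
H, S, D, I = word_alignment_counts("umwana araryamye neza", "umwana ararya neza cyane")
wer = (S + D + I) / (H + S + D)
print(H, S, D, I, round(wer, 2))  # → 2 1 0 1 0.67
```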
📚 Documentation
Model Information
| Property | Details |
| --- | --- |
| Model Type | XLSR Wav2Vec2 Large Kinyarwanda with apostrophes |
| Training Data | Common Voice `rw` dataset, about 25% of the available data (utterances without downvotes and shorter than 9.5 seconds) |
| Evaluation Metric | Word Error Rate (WER) |
| Test Result | 39.92% |
Training Details
Examples from the Common Voice training dataset were used for training, after filtering out utterances that had any down votes or were longer than 9.5 seconds. The training data totals about 125k examples, 25% of the available data. Training ran on a single V100 GPU provided by OVHcloud for a total of about 60 hours: 20 epochs on one block of 32k examples, then 10 epochs on each of 3 further blocks of 32k examples. For validation, 2048 examples from the validation dataset were used.
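The block schedule above works out to the following number of training example-passes, simple arithmetic from the figures in this section:

```python
# Schedule described above: 20 epochs on the first 32k-example block,
# then 10 epochs on each of 3 further 32k-example blocks.
block_size = 32_000
epochs_per_block = [20, 10, 10, 10]

example_passes = sum(e * block_size for e in epochs_per_block)
print(example_passes)  # → 1600000
```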
The script used for training is adapted from the example script provided in the transformers repo.
📄 License
This project is licensed under the Apache-2.0 license.