# 🚀 Wav2Vec2-Large-XLSR-53-Hindi
This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on Hindi. It addresses the need for accurate Hindi automatic speech recognition by leveraging multiple well-balanced datasets, offering a reliable solution for speech-related tasks in Hindi.
## Model Information
| Property | Details |
|---|---|
| Model Type | Fine-tuned Wav2Vec2-Large-XLSR-53 for Hindi |
| Training Data | Common Voice, Indic TTS - IITM, IIITH - Indic Speech Datasets |
## Results
The model has been evaluated on the following task and datasets:
- Task: Speech Recognition (Automatic Speech Recognition)
- Datasets:
  - Common Voice hi: WER of 56.46%
  - Custom dataset (from 20% of the Indic TTS, IIITH and Common Voice test splits): WER of 17.23%
## License
This model is released under the Apache-2.0 license.
## 🚀 Quick Start
When using this model, make sure that your speech input is sampled at 16 kHz.
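If your audio is recorded at a different rate, resample it first. A minimal sketch using torchaudio (the file name is illustrative):

```python
import torchaudio

# Load any local recording (path is illustrative) and resample it to 16 kHz.
waveform, orig_sr = torchaudio.load("example.wav")
if orig_sr != 16_000:
    waveform = torchaudio.transforms.Resample(orig_sr, 16_000)(waveform)
```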
## 💻 Usage Examples
### Basic Usage
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "hi", split="test")

processor = Wav2Vec2Processor.from_pretrained("skylord/wav2vec2-large-xlsr-hindi")
model = Wav2Vec2ForCTC.from_pretrained("skylord/wav2vec2-large-xlsr-hindi")

# Common Voice clips are 48 kHz; the model expects 16 kHz input
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays.
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
### Predictions
#### Good Predictions

| Prediction | Reference |
|---|---|
| फिर वो सूरज तारे पहाड बारिश पदछड़ दिन रात शाम नदी बर्फ़ समुद्र धुंध हवा कुछ भी हो सकती है | फिर वो सूरज तारे पहाड़ बारिश पतझड़ दिन रात शाम नदी बर्फ़ समुद्र धुंध हवा कुछ भी हो सकती है |
| इस कारण जंगल में बडी दूर स्थित राघव के आश्रम में लोघ कम आने लगे और अधिकांश भक्त सुंदर के आश्रम में जाने लगे | इस कारण जंगल में बड़ी दूर स्थित राघव के आश्रम में लोग कम आने लगे और अधिकांश भक्त सुन्दर के आश्रम में जाने लगे |
| अपने बचन के अनुसार शुभमूर्त पर अनंत दक्षिणी पर्वत गया और मंत्रों का जप करके सरोवर में उतरा | अपने बचन के अनुसार शुभमुहूर्त पर अनंत दक्षिणी पर्वत गया और मंत्रों का जप करके सरोवर में उतरा |
#### Poor Predictions

| Prediction | Reference |
|---|---|
| वस गनिल साफ़ है। | उसका दिल साफ़ है। |
| चाय वा एक कुछ लैंगे हब | चायवाय कुछ लेंगे आप |
| टॉम आधे है स्कूल हें है | टॉम अभी भी स्कूल में है |
## 📚 Documentation
### Evaluation
The model can be evaluated on the following two datasets:
- Custom dataset created from 20% of the Indic TTS, IIITH and Common Voice test splits: WER 17.23%
- Common Voice Hindi test dataset: WER 56.46%
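For context, word error rate (WER) is the standard edit-distance metric for ASR:

$$
\mathrm{WER} = \frac{S + D + I}{N}
$$

where $S$, $D$ and $I$ count the word substitutions, deletions and insertions needed to turn the hypothesis into the reference, and $N$ is the number of words in the reference.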
Links to the datasets are provided above. Train-test CSV files are shared via the following Google Drive links: a. IIITH train test b. Indic TTS train test
Update the `audio_path` column to match your local file structure; an assumed CSV layout is sketched below.
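The script below assumes each CSV provides at least an `audio_path` column pointing at a local audio file and a `target_text` transcript column (the column names are taken from the script; paths and transcripts here are illustrative):

```
audio_path,target_text
/workspace/data/hi2/wavs/0001.wav,नमस्ते
/workspace/data/hi2/wavs/0002.wav,धन्यवाद
```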
```python
import re
import torch
import torchaudio
from datasets import load_dataset, load_metric, concatenate_datasets
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

## Load the datasets
common_voice = load_dataset("common_voice", "hi")
indic = load_dataset("csv", data_files={"train": "/workspace/data/hi2/indic_train_full.csv",
                                        "test": "/workspace/data/hi2/indic_test_full.csv"}, download_mode="force_redownload")
iiith = load_dataset("csv", data_files={"train": "/workspace/data/hi2/iiit_hi_train.csv",
                                        "test": "/workspace/data/hi2/iiit_hi_test.csv"}, download_mode="force_redownload")

## Pre-process datasets and concatenate to create the test dataset
# Drop the extra columns of common_voice and align its column names with the CSVs
split = ['train', 'test', 'validation', 'other', 'invalidated']
for sp in split:
    common_voice[sp] = common_voice[sp].remove_columns(['client_id', 'up_votes', 'down_votes', 'age', 'gender', 'accent', 'locale', 'segment'])

common_voice = common_voice.rename_column('path', 'audio_path')
common_voice = common_voice.rename_column('sentence', 'target_text')

train_dataset = concatenate_datasets([indic['train'], iiith['train'], common_voice['train']])
test_dataset = concatenate_datasets([indic['test'], iiith['test'], common_voice['test'], common_voice['validation']])

## Load model from the HF hub
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("skylord/wav2vec2-large-xlsr-hindi")
model = Wav2Vec2ForCTC.from_pretrained("skylord/wav2vec2-large-xlsr-hindi")
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\'\;\:\"\“\%\‘\”\�Utrnle\_]'
unicode_ignore_regex = r'[dceMaWpmFui\xa0\u200d]'  # Some unwanted unicode chars

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays.
def speech_file_to_array_fn(batch):
    batch["target_text"] = re.sub(chars_to_ignore_regex, '', batch["target_text"])
    batch["target_text"] = re.sub(unicode_ignore_regex, '', batch["target_text"])
    speech_array, sampling_rate = torchaudio.load(batch["audio_path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Run batched inference and collect the predicted strings.
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["target_text"])))
```
Test Result on custom dataset: 17.23 %
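To see where the model goes wrong (as in the prediction tables above), it can help to print a few prediction/reference pairs. A small sketch, assuming `result` from the script above:

```python
# Assumes `result` produced by the evaluation script above.
for pred, ref in zip(result["pred_strings"][:5], result["target_text"][:5]):
    print("PRED:", pred)
    print("REF :", ref)
    print("-" * 40)
```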
```python
import re
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "hi", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("skylord/wav2vec2-large-xlsr-hindi")
model = Wav2Vec2ForCTC.from_pretrained("skylord/wav2vec2-large-xlsr-hindi")
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\'\;\:\"\“\%\‘\”\�Utrnle\_]'
unicode_ignore_regex = r'[dceMaWpmFui\xa0\u200d]'  # Some unwanted unicode chars

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays.
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"])
    batch["sentence"] = re.sub(unicode_ignore_regex, '', batch["sentence"])
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Run batched inference and collect the predicted strings.
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
Test Result on CommonVoice: 56.46 %
### Training
The Common Voice train and validation datasets were used for training. The script used for training and the wandb dashboard can be found here.
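The linked script is the authoritative reference. Purely as an illustration of how an XLSR fine-tune of this kind is typically configured with transformers, a minimal sketch (every hyperparameter here is an assumption, not the author's setting):

```python
# Illustrative sketch only -- hyperparameters are assumptions, not the
# settings used to train skylord/wav2vec2-large-xlsr-hindi.
from transformers import TrainingArguments, Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("skylord/wav2vec2-large-xlsr-hindi")
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-large-xlsr-53",   # base checkpoint being fine-tuned
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,
    vocab_size=len(processor.tokenizer),
)
model.freeze_feature_extractor()  # common practice: keep the CNN encoder frozen

training_args = TrainingArguments(
    output_dir="./wav2vec2-large-xlsr-hindi",
    per_device_train_batch_size=16,   # illustrative
    num_train_epochs=30,              # illustrative
    learning_rate=3e-4,               # illustrative
    evaluation_strategy="steps",
    save_steps=400,
    eval_steps=400,
    warmup_steps=500,
)
# A Trainer would then be built with a CTC data collator plus the train/test
# datasets prepared as in the evaluation scripts above.
```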

