# Wav2Vec2-Large-XLSR-53-Marathi

This model is fine-tuned from facebook/wav2vec2-large-xlsr-53 on Marathi, aiming to provide high-quality automatic speech recognition for the Marathi language.
## Quick Start

This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on Marathi, trained on the OpenSLR SLR64 dataset and the InterSpeech 2021 Marathi dataset. Note that the OpenSLR data contains only female voices; please keep this in mind before using the model for your task. When using this model, make sure that your speech input is sampled at 16 kHz.
> **Important Note:** The OpenSLR dataset used for fine-tuning contains only female voices. Consider this factor when applying the model to your specific task.

> **Usage Tip:** Ensure that your speech input is sampled at 16 kHz when using this model.
## Features

- **Fine-tuned on Marathi**: Specifically optimized for Marathi speech recognition.
- **No language model required**: Can be used directly for speech recognition tasks.
## Usage Examples

### Basic Usage

The model can be used directly (without a language model) as follows, assuming you have a dataset with Marathi `text` and `audio_path` fields:
```python
import torch
import torchaudio
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# test_data = # TODO: WRITE YOUR CODE TO LOAD THE TEST DATASET. For a sample, see the Colab link in the Training Details section.

processor = Wav2Vec2Processor.from_pretrained("gchhablani/wav2vec2-large-xlsr-mr-3")
model = Wav2Vec2ForCTC.from_pretrained("gchhablani/wav2vec2-large-xlsr-mr-3")

# Read the audio files as arrays and resample to 16 kHz.
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["audio_path"])
    # sampling_rate can vary across files, so resample every clip.
    batch["speech"] = librosa.resample(speech_array[0].numpy(), orig_sr=sampling_rate, target_sr=16_000)
    return batch

test_data = test_data.map(speech_file_to_array_fn)
inputs = processor(test_data["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_data["text"][:2])
```
### Advanced Usage

The model can be evaluated as follows on 10% of the Marathi data from OpenSLR:
```python
import re

import torch
import torchaudio
import librosa
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# test_data = # TODO: WRITE YOUR CODE TO LOAD THE TEST DATASET. For a sample, see the Colab link in the Training Details section.

wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("gchhablani/wav2vec2-large-xlsr-mr-3")
model = Wav2Vec2ForCTC.from_pretrained("gchhablani/wav2vec2-large-xlsr-mr-3")
model.to("cuda")

# Punctuation to strip from the reference transcripts
# (reconstructed; the character list in the original card was garbled).
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\…]'

# Normalize the transcripts and read the audio files as 16 kHz arrays.
def speech_file_to_array_fn(batch):
    batch["text"] = re.sub(chars_to_ignore_regex, '', batch["text"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["audio_path"])
    batch["speech"] = librosa.resample(speech_array[0].numpy(), orig_sr=sampling_rate, target_sr=16_000)
    return batch

test_data = test_data.map(speech_file_to_array_fn)

# Run batched greedy decoding on the GPU.
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_data.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["text"])))
```
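For reference, the WER metric used in the evaluation above is the word-level edit distance divided by the number of reference words. A minimal pure-Python equivalent (illustrative only, not the `datasets` implementation):

```python
def word_error_rate(prediction: str, reference: str) -> float:
    """Word-level Levenshtein distance divided by the number of reference words."""
    hyp, ref = prediction.split(), reference.split()
    # dp[j] holds the edit distance between the first i hypothesis words
    # and the first j reference words (rolling single-row DP).
    dp = list(range(len(ref) + 1))
    for i in range(1, len(hyp) + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, len(ref) + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                              # deletion
                        dp[j - 1] + 1,                          # insertion
                        prev + (hyp[i - 1] != ref[j - 1]))      # substitution
            prev = cur
    return dp[-1] / len(ref)

print(word_error_rate("the cat sat", "the cat sat down"))  # 1 missing word / 4 -> 0.25
```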
## Documentation

### Test Results

- Combined test result: 19.05% WER (157 + 157 examples)
- OpenSLR test: 14.15% WER (157 examples)
- InterSpeech test: 27.14% WER (157 examples)
### Training Details

1412 examples from the OpenSLR Marathi dataset and 1412 examples from the InterSpeech 2021 Marathi ASR dataset were used for training. For testing, 157 examples from each were used.
The Colab notebook used for training and evaluation can be found here.
## Technical Details

The model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on Marathi datasets. It is trained with the Connectionist Temporal Classification (CTC) loss and can be used for end-to-end speech recognition tasks.
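As an illustration of how CTC logits become text, greedy decoding takes the per-frame argmax, merges consecutive repeated ids, and drops the blank token. The token ids and blank index below are hypothetical, not the model's actual vocabulary:

```python
def ctc_greedy_collapse(frame_ids, blank_id=0):
    """Collapse per-frame argmax ids into an output sequence:
    merge consecutive repeats, then drop blanks."""
    out, prev = [], None
    for t in frame_ids:
        if t != prev and t != blank_id:
            out.append(t)
        prev = t
    return out

# Hypothetical per-frame argmax ids; 0 is the blank token. The blank between
# the two 5s keeps them as two separate output tokens.
print(ctc_greedy_collapse([0, 5, 5, 0, 5, 7, 7, 0]))  # -> [5, 5, 7]
```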
## License

This model is licensed under the apache-2.0 license.
## Model Information

| Property | Details |
|---|---|
| Model Type | Wav2Vec2-Large-XLSR-53 fine-tuned on Marathi |
| Training Data | OpenSLR SLR64 Marathi dataset and InterSpeech 2021 Marathi ASR dataset |
| Metrics | Word Error Rate (WER) |
| Tags | audio, automatic-speech-recognition, speech, xlsr-fine-tuning-week |
| Model Name | XLSR Wav2Vec2 Large 53 Marathi by Gunjan Chhablani |

