# NVIDIA FastConformer-Hybrid Large (uz)
This model is designed for automatic speech recognition of the Uzbek language. It can transcribe text in the Uzbek alphabet, handling both upper and lower case along with spaces, commas, question marks, and dashes. It is a large-scale FastConformer Transducer-CTC model with around 115M parameters, trained using a hybrid approach with Transducer and CTC losses.
## Quick Start
To start using this model, you first need to install NVIDIA NeMo. It is recommended to install it after installing the latest PyTorch version.
```bash
pip install nemo_toolkit['all']
```
## Features
- Text Transcription: Capable of transcribing Uzbek text with various punctuation and case formats.
- Hybrid Model: Trained with both Transducer and CTC losses, enhancing performance.
- Large-Scale: A large-parameter model (around 115M) for better accuracy.
## Installation
As mentioned in the quick start, you can install the necessary toolkit using the following command:
```bash
pip install nemo_toolkit['all']
```
## Usage Examples
### Basic Usage
Automatically instantiate the model:
```python
import nemo.collections.asr as nemo_asr

asr_model = nemo_asr.models.EncDecHybridRNNTCTCBPEModel.from_pretrained(model_name="nvidia/stt_uz_fastconformer_hybrid_large_pc")
```
### Transcribing a Single Audio File
```python
output = asr_model.transcribe(['audio_file.wav'])
print(output[0].text)
```
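The same `transcribe()` call also accepts a list of paths; a minimal sketch (the file names and the `batch_size` argument are illustrative assumptions to check against your NeMo version):

```python
# Transcribe several files in one call; batch_size controls how many
# clips are decoded per forward pass (illustrative value).
outputs = asr_model.transcribe(['first.wav', 'second.wav'], batch_size=2)
for out in outputs:
    print(out.text)
```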
### Transcribing Multiple Audio Files
Using Transducer mode inference:
```bash
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
  pretrained_name="nvidia/stt_uz_fastconformer_hybrid_large_pc" \
  audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```
Using CTC mode inference:
```bash
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
  pretrained_name="nvidia/stt_uz_fastconformer_hybrid_large_pc" \
  audio_dir="<DIRECTORY CONTAINING AUDIO FILES>" \
  decoder_type="ctc"
```
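Hybrid NeMo models can also switch decoders in-process rather than via the script; a minimal sketch using `change_decoding_strategy`, which the hybrid model classes expose (treat the exact call as an assumption to verify against your NeMo version):

```python
# Switch the already-loaded hybrid model (asr_model from the example
# above) to its CTC decoder, then transcribe as before.
asr_model.change_decoding_strategy(decoder_type="ctc")
output = asr_model.transcribe(['audio_file.wav'])
print(output[0].text)
```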
### Input and Output
- Input: This model accepts 16000 Hz mono-channel audio (WAV files); a conversion sketch follows this list.
- Output: It provides transcribed speech as a string for a given audio sample.
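If your recordings are not already 16 kHz mono WAV, convert them first; a minimal sketch using `librosa` and `soundfile` (both library choices are assumptions; any resampling tool works):

```python
import librosa
import soundfile as sf

# Load any audio file, downmixing to mono and resampling to 16 kHz.
audio, sr = librosa.load("input.mp3", sr=16000, mono=True)

# Write a 16-bit PCM WAV in the format the model expects.
sf.write("audio_file.wav", audio, sr, subtype="PCM_16")
```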
## Documentation
### Model Architecture
FastConformer [1] is an optimized version of the Conformer model with 8x depthwise-separable convolutional downsampling. The model is trained in a multitask setup with a Transducer decoder loss. You can find more details on FastConformer here: Fast-Conformer Model.
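As a rough sanity check of what 8x downsampling means for the encoder's output rate, assuming the usual 10 ms feature hop (an assumption, not stated in this card):

```python
# 16 kHz audio with a 10 ms feature hop yields 100 feature frames/s;
# the 8x convolutional downsampling reduces that at the encoder output.
feature_fps = 100      # assumed 10 ms hop
downsampling = 8       # FastConformer's 8x downsampling

encoder_fps = feature_fps / downsampling
print(f"{encoder_fps} encoder frames/s -> {1000 / encoder_fps:.0f} ms per frame")
# 12.5 encoder frames/s -> 80 ms per frame
```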
### Training
The NeMo toolkit [3] was used to train the models for several hundred epochs. These models were trained with this example script and this base config.
The tokenizers for these models were built using the text transcripts of the train set with this script.
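NeMo's training and tokenizer scripts read JSON-lines manifests; a minimal sketch of writing one (the path, duration, and transcript below are placeholders):

```python
import json

# One JSON object per line: audio path, duration in seconds, transcript.
entries = [
    {"audio_filepath": "clips/sample_0001.wav", "duration": 3.2, "text": "salom dunyo"},
]

with open("train_manifest.json", "w", encoding="utf-8") as f:
    for entry in entries:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")
```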
### Datasets
The model is trained on a composite corpus of around 1,000 hours of Uzbek speech drawn from three datasets:
- MCV 17.0 Uzbek (~90 hrs)
- UzbekVoice (~900 hrs)
- Fleurs Uzbek (~10 hrs)
### Performance
The performance of automatic speech recognition models is measured using Word Error Rate (WER). Since this model is trained on multiple domains and a comparatively large corpus, it will generally transcribe general-domain audio well.
The following table summarizes the performance of the model with the Transducer decoder, reported as Word Error Rate (WER%) with greedy decoding; a sketch of computing WER follows the table.
| Test set | WER (%) | WER w/o CAP | WER w/o PUNCT | WER w/o PUNCT & CAP |
|---|---|---|---|---|
| FLEURS DEV (used as test) | 17.52 | 16.20 | 12.20 | 10.73 |
| MCV TEST | 16.46 | 15.89 | 7.78 | 7.18 |
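As referenced above, a minimal sketch of computing WER for your own hypothesis/reference pairs, using NeMo's `word_error_rate` helper (the strings below are placeholders):

```python
from nemo.collections.asr.metrics.wer import word_error_rate

# Placeholder pairs; in practice hypotheses come from asr_model.transcribe()
# and references from the test-set manifest.
hypotheses = ["salom dunyo"]
references = ["salom, dunyo"]

print(f"WER: {100 * word_error_rate(hypotheses, references):.2f}%")
```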
### Limitations
The model is non-streaming and outputs speech as a string with the capitalization and punctuation described above. Since it was trained on publicly available speech datasets, performance might degrade for speech that includes technical terms or vernacular the model has not been trained on.
### NVIDIA Riva: Deployment
NVIDIA Riva is an accelerated speech AI SDK deployable on-prem, in all clouds, multi-cloud, hybrid, at the edge, and embedded.
Additionally, Riva provides:
- World-class out-of-the-box accuracy for the most common languages, with model checkpoints trained on proprietary data with hundreds of thousands of GPU-compute hours
- Best-in-class accuracy with run-time word boosting (e.g., brand and product names) and customization of acoustic model, language model, and inverse text normalization
- Streaming speech recognition, Kubernetes-compatible scaling, and enterprise-grade support
Although this model isn't supported yet by Riva, the list of supported models is here.
Check out the Riva live demo.
## Technical Details
The model is a "large" version of FastConformer Transducer-CTC, with around 115M parameters. It uses a hybrid training approach with Transducer and CTC losses. The FastConformer architecture is an optimized version of the Conformer model with 8x depthwise-separable convolutional downsampling.
## License
License to use this model is covered by CC-BY-4.0. By downloading the public and release version of the model, you accept the terms and conditions of the CC-BY-4.0 license.
## References
[1] Fast Conformer with Linearly Scalable Attention for Efficient Speech Recognition
[2] Google Sentencepiece Tokenizer
[3] NVIDIA NeMo Toolkit