🚀 Massively Multilingual Speech (MMS): Lao Text-to-Speech
This repository provides a text-to-speech (TTS) model checkpoint specifically for the Lao (lao) language. It is part of Facebook's Massively Multilingual Speech project, which aims to offer speech technology across a wide range of languages. You can find more details about supported languages and their ISO 639-3 codes in the MMS Language Coverage Overview. All MMS-TTS checkpoints are available on the Hugging Face Hub: facebook/mms-tts. The MMS-TTS feature has been available in the 🤗 Transformers library since version 4.33.
✨ Features
- Multilingual Support: Part of a project aiming to cover over 1,000 languages.
- End-to-End Synthesis: Utilizes the VITS model for direct text-to-speech conversion.
- Stochastic Duration Prediction: Allows for different speech rhythms from the same input text.
📦 Installation
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint, first install the latest version of the library:
```bash
pip install --upgrade transformers accelerate
```
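To confirm that the installed version meets the minimum requirement, a quick check (a minimal sketch; nothing model-specific is assumed):

```python
import transformers

# MMS-TTS requires 🤗 Transformers 4.33 or newer
print(transformers.__version__)
```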
💻 Usage Examples
Basic Usage
```python
from transformers import VitsModel, AutoTokenizer
import torch

# Load the Lao TTS checkpoint and its matching tokenizer
model = VitsModel.from_pretrained("facebook/mms-tts-lao")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-lao")

text = "some example text in the Lao language"
inputs = tokenizer(text, return_tensors="pt")

# Run inference without tracking gradients
with torch.no_grad():
    output = model(**inputs).waveform
```
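The returned `output` is a batched waveform tensor, and the sampling rate lives on the model config; the saving and playback examples below rely on both. A quick inspection:

```python
# output has shape (batch_size, num_samples); MMS-TTS checkpoints generate 16 kHz audio
print(output.shape)                # e.g. torch.Size([1, num_samples])
print(model.config.sampling_rate)  # 16000
```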
Saving the Output as a .wav File
```python
import scipy.io.wavfile

# scipy expects a NumPy array, so convert the (1, num_samples) torch tensor first
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().numpy())
```
Displaying in a Jupyter Notebook / Google Colab
```python
from IPython.display import Audio

# Audio accepts a 1-D NumPy array together with the sampling rate
Audio(output.squeeze().numpy(), rate=model.config.sampling_rate)
```
🔧 Technical Details
VITS (Variational Inference with adversarial learning for end-to-end Text-to-Speech) is an end-to-end speech synthesis model. It predicts a speech waveform based on an input text sequence. It is a conditional variational autoencoder (VAE) consisting of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features is predicted by the flow-based module, which comprises a Transformer-based text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers, much like the HiFi-GAN vocoder. Given the one-to-many nature of the TTS problem, where the same text can be spoken in many ways, the model also includes a stochastic duration predictor, which allows it to synthesize speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from the variational lower bound and from adversarial training. To enhance the model's expressiveness, normalizing flows are applied to the conditional prior distribution. During inference, the text encodings are up-sampled based on the duration prediction module and then mapped to the waveform by a cascade of the flow module and the HiFi-GAN decoder. Because the duration predictor is stochastic, the model is non-deterministic: a fixed seed is needed to generate the same speech waveform twice.
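For reproducible generation, the seed can be fixed before calling the model. The sketch below uses `set_seed` from 🤗 Transformers; the `speaking_rate` and `noise_scale` attributes are the tuning knobs exposed by the Transformers VITS implementation (the values shown are the library defaults):

```python
import torch
from transformers import VitsModel, AutoTokenizer, set_seed

model = VitsModel.from_pretrained("facebook/mms-tts-lao")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-lao")
inputs = tokenizer("some example text in the Lao language", return_tensors="pt")

set_seed(555)  # fix the seed so the stochastic duration predictor is deterministic

model.speaking_rate = 1.0  # > 1.0 speeds speech up, < 1.0 slows it down
model.noise_scale = 0.667  # higher values increase variation in the output

with torch.no_grad():
    waveform = model(**inputs).waveform
```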
For the MMS project, a separate VITS checkpoint is trained for each language.
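Since the checkpoints follow the naming pattern `facebook/mms-tts-<iso>`, switching language only requires swapping in the relevant ISO 639-3 code. For example, loading the English checkpoint:

```python
from transformers import VitsModel, AutoTokenizer

# Same API, different ISO 639-3 code in the repository name
model_eng = VitsModel.from_pretrained("facebook/mms-tts-eng")
tokenizer_eng = AutoTokenizer.from_pretrained("facebook/mms-tts-eng")
```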
📄 BibTeX Citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```bibtex
@article{pratap2023mms,
    title={Scaling Speech Technology to 1,000+ Languages},
    author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
    journal={arXiv},
    year={2023}
}
```
📄 License
The model is licensed under CC-BY-NC 4.0.