🚀 Massively Multilingual Speech (MMS): Kazakh Text-to-Speech
This repository provides a text-to-speech (TTS) model checkpoint for the Kazakh (kaz) language. It is part of Meta AI's Massively Multilingual Speech (MMS) project, which aims to bring speech technology to a wide range of languages. Details on supported languages and their ISO 639-3 codes are available in the MMS Language Coverage Overview, and all MMS-TTS checkpoints are hosted on the Hugging Face Hub under facebook/mms-tts. MMS-TTS has been available in the 🤗 Transformers library since version 4.33.
✨ Features
- Multilingual Support: Part of a project that aims to scale speech technology to over 1000 languages.
- Stochastic Synthesis: The VITS model can generate speech with different rhythms from the same input text.
- End-to-End Training: Trained end-to-end with a combination of variational and adversarial losses.
📦 Installation
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint, first install the latest version of the library:
```bash
pip install --upgrade transformers accelerate
```
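To confirm that the installed version meets the 4.33 minimum, you can check it from Python (a quick sanity check, not a required step):

```python
import transformers

# MMS-TTS requires Transformers >= 4.33
print(transformers.__version__)
```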
💻 Usage Examples
Basic Usage
```python
from transformers import VitsModel, AutoTokenizer
import torch

# Load the Kazakh checkpoint and its matching tokenizer
model = VitsModel.from_pretrained("facebook/mms-tts-kaz")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-kaz")

text = "some example text in the Kazakh language"
inputs = tokenizer(text, return_tensors="pt")

# Run inference without tracking gradients
with torch.no_grad():
    output = model(**inputs).waveform
```
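The returned `waveform` is a batched PyTorch tensor. A quick inspection (the sampling rate is read from the checkpoint's config rather than hard-coded):

```python
# Shape is (batch_size, num_samples): one waveform per input text
print(output.shape)

# MMS-TTS checkpoints store their sampling rate (16,000 Hz) in the config
print(model.config.sampling_rate)
```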
Advanced Usage
Save the resulting waveform as a `.wav` file:
```python
import scipy.io.wavfile

# scipy expects NumPy data of shape (num_samples,), so drop the batch dimension
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().numpy())
```
Or display it in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio

# Pass a 1D array together with the sampling rate
Audio(output.squeeze().numpy(), rate=model.config.sampling_rate)
```
📚 Documentation
Model Details
VITS (Variational Inference with adversarial learning for end-to-end Text-to-Speech) is an end-to-end speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational autoencoder (VAE) comprising a posterior encoder, a decoder, and a conditional prior.
A flow-based module, consisting of a Transformer-based text encoder and multiple coupling layers, predicts a set of spectrogram-based acoustic features. The spectrogram is decoded using a stack of transposed convolutional layers, much like the HiFi-GAN vocoder. Given the one-to-many nature of the TTS problem (the same text can be spoken in many ways), the model includes a stochastic duration predictor, which allows it to synthesize speech with different rhythms from the same input text.
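These components map onto submodules of the Transformers implementation. As an illustrative sketch (the attribute names below are those used by `VitsModel` in recent Transformers versions; treat them as an assumption and confirm against your installed version):

```python
from transformers import VitsModel

model = VitsModel.from_pretrained("facebook/mms-tts-kaz")

# Top-level building blocks of the VITS architecture
print(type(model.text_encoder).__name__)        # Transformer-based text encoder
print(type(model.flow).__name__)                # flow module over the conditional prior
print(type(model.decoder).__name__)             # HiFi-GAN-style transposed-conv decoder
print(type(model.duration_predictor).__name__)  # (stochastic) duration predictor
```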
The model is trained end-to-end with losses derived from the variational lower bound and adversarial training. Normalizing flows are applied to the conditional prior distribution to increase the model's expressiveness. During inference, the text encodings are up-sampled according to the duration prediction module and then mapped into the waveform by a cascade of the flow module and the HiFi-GAN decoder. Because the duration predictor is stochastic, the model is non-deterministic, and a fixed seed is needed to reproduce the same speech waveform.
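For reproducible output you can fix the seed before generation with `set_seed` from Transformers. You can also nudge the synthesis by adjusting the sampling attributes the model exposes (`speaking_rate` and `noise_scale` are the names documented for VITS/MMS in Transformers; confirm against your installed version):

```python
import torch
from transformers import VitsModel, AutoTokenizer, set_seed

model = VitsModel.from_pretrained("facebook/mms-tts-kaz")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-kaz")
inputs = tokenizer("some example text in the Kazakh language", return_tensors="pt")

set_seed(555)  # make the stochastic duration predictor deterministic

model.speaking_rate = 1.5  # speak faster than the default
model.noise_scale = 0.8    # more variation when sampling from the prior

with torch.no_grad():
    output = model(**inputs).waveform
```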
For the MMS project, a separate VITS checkpoint is trained for each language.
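Checkpoints follow the naming pattern facebook/mms-tts-&lt;ISO 639-3 code&gt;, so switching languages only means changing the repository ID. For example, loading the English (eng) checkpoint uses the same API:

```python
from transformers import VitsModel, AutoTokenizer

# Same API, different ISO 639-3 code in the repository name
model = VitsModel.from_pretrained("facebook/mms-tts-eng")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-eng")
```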
BibTeX Citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```bibtex
@article{pratap2023mms,
  title={Scaling Speech Technology to 1,000+ Languages},
  author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
  journal={arXiv},
  year={2023}
}
```
📄 License
The model is licensed as CC-BY-NC 4.0.