Massively Multilingual Speech (MMS): Chinese, Min Nan Text-to-Speech
This repository provides a text-to-speech (TTS) model checkpoint for the Chinese, Min Nan (nan) language, part of Facebook's Massively Multilingual Speech (MMS) project, which aims to provide speech technology across a diverse range of languages.
Quick Start
MMS-TTS has been available in the 🤗 Transformers library since version 4.33. To use this checkpoint, first install the latest version of the library:
pip install --upgrade transformers accelerate
Then, you can run inference with the following code snippet:
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-nan")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-nan")
text = "some example text in the Chinese, Min Nan language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    output = model(**inputs).waveform
The resulting waveform can be saved as a .wav file:
import scipy.io.wavfile
# convert the (1, num_samples) torch tensor to a NumPy array before writing
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().cpu().numpy())
Or displayed in a Jupyter Notebook / Google Colab:
from IPython.display import Audio
Audio(output.squeeze().cpu().numpy(), rate=model.config.sampling_rate)
Features
- Multilingual Support: Part of the MMS project, aiming to provide speech technology for a wide range of languages.
- Advanced Model Architecture: Based on the VITS end-to-end speech synthesis model, which can generate speech with different rhythms from the same input text.
Documentation
Model Details
VITS (Variational Inference with adversarial learning for end-to-end Text-to-Speech) is an end-to-end speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational autoencoder (VAE) composed of a posterior encoder, decoder, and conditional prior.

A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers, much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows it to synthesise speech with different rhythms from the same input text.

The model is trained end-to-end with a combination of losses derived from the variational lower bound and adversarial training. To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During inference, the text encodings are up-sampled based on the duration prediction module and then mapped into the waveform using a cascade of the flow module and HiFi-GAN decoder. Because the duration predictor is stochastic, the model is non-deterministic and requires a fixed seed to generate the same speech waveform.
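For example, fixing the seed before each forward pass makes the stochastic duration predictor repeatable. A minimal sketch, reusing the model, tokenizer, and inputs from the Quick Start and the set_seed helper from 🤗 Transformers:

import torch
from transformers import set_seed

set_seed(555)  # seed all relevant RNGs so the duration predictor is repeatable
with torch.no_grad():
    first = model(**inputs).waveform

set_seed(555)  # same seed again before the second call
with torch.no_grad():
    second = model(**inputs).waveform

assert torch.equal(first, second)  # identical waveforms on the same device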
For the MMS project, a separate VITS checkpoint is trained on each language.
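Loading a different language is therefore just a matter of swapping the checkpoint ID; for example, the English checkpoint is facebook/mms-tts-eng:

from transformers import VitsModel, AutoTokenizer

# each MMS-TTS language ships as its own checkpoint; change the ID to change language
model = VitsModel.from_pretrained("facebook/mms-tts-eng")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-eng")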
Usage Examples
Basic Usage
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-nan")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-nan")
text = "some example text in the Chinese, Min Nan language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    output = model(**inputs).waveform
Advanced Usage
import scipy.io.wavfile
from IPython.display import Audio
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-nan")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-nan")
text = "some example text in the Chinese, Min Nan language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    output = model(**inputs).waveform
# convert the (1, num_samples) torch tensor to a NumPy array for scipy and IPython
audio = output.squeeze().cpu().numpy()
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=audio)
Audio(audio, rate=model.config.sampling_rate)
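The tokenizer can also pad several sentences to a common length so they are synthesised in one forward pass. A sketch under the assumption that padded batching behaves as it does for other 🤗 Transformers models; the example texts are placeholders:

import torch
from transformers import VitsModel, AutoTokenizer

model = VitsModel.from_pretrained("facebook/mms-tts-nan")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-nan")

texts = ["first example sentence", "second example sentence"]
inputs = tokenizer(texts, return_tensors="pt", padding=True)  # pad to a common length

with torch.no_grad():
    # waveform has shape (batch_size, num_samples); per-item lengths may
    # differ, so trim trailing samples of shorter items as needed
    waveforms = model(**inputs).waveform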
License
The model is licensed as CC-BY-NC 4.0.
Technical Details
Model Architecture
The VITS model is a conditional variational autoencoder (VAE) composed of a posterior encoder, decoder, and conditional prior. It uses a Transformer-based text encoder and multiple coupling layers in the flow-based module to predict spectrogram-based acoustic features. The spectrogram is decoded using transposed convolutional layers similar to the HiFi-GAN vocoder.
Training
The model is trained end-to-end with a combination of losses derived from the variational lower bound and adversarial training. Normalizing flows are applied to the conditional prior distribution to improve the model's expressiveness.
Inference
During inference, text encodings are up-sampled based on the duration prediction module and then mapped to the waveform using the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor, a fixed seed is required to generate the same speech waveform.
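As a concrete example, the duration of a generated clip follows directly from the number of samples and the model's sampling rate. A minimal sketch, reusing the output tensor from the usage examples above:

# duration in seconds = number of samples / sampling rate
num_samples = output.shape[-1]  # output has shape (1, num_samples)
duration_s = num_samples / model.config.sampling_rate
print(f"{duration_s:.2f} s at {model.config.sampling_rate} Hz")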
BibTeX citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
@article{pratap2023mms,
  title={Scaling Speech Technology to 1,000+ Languages},
  author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
  journal={arXiv},
  year={2023}
}