🚀 Parler-TTS Mini v1.1
Parler-TTS Mini v1.1 is a lightweight text-to-speech (TTS) model. It's trained on 45K hours of audio data and can generate high-quality, natural-sounding speech. You can control features such as gender, background noise, speaking rate, pitch, and reverberation with a simple text prompt.
🚨 Parler-TTS Mini v1.1 is exactly the same model as Mini v1. It was trained on the same datasets with the same training configuration. The only change is the use of a better prompt tokenizer. This tokenizer has a larger vocabulary and handles byte fallback, which simplifies multilingual training. It is based on the unsloth/llama-2-7b tokenizer. Thanks to the AI4Bharat team for providing advice and assistance in improving tokenization. 🚨
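A quick way to see the new prompt tokenizer in action (a minimal sketch; the exact token strings depend on your transformers version) is to tokenize some non-English text and confirm that byte fallback kicks in instead of unknown tokens:

```py
from transformers import AutoTokenizer

# Load the v1.1 prompt tokenizer (Llama-2 based, with byte fallback)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-v1.1")

# Characters outside the vocabulary fall back to byte tokens rather than <unk>
print(tokenizer.tokenize("Bonjour, こんにちは!"))
```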
🚀 Quick Start
👨‍💻 Installation
Using Parler-TTS is as simple as "bonjour". You only need to install the library once:
```
pip install git+https://github.com/huggingface/parler-tts.git
```
💻 Usage Examples
🎲 Basic Usage - Using a random voice
```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf

device = "cuda:0" if torch.cuda.is_available() else "cpu"

model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-v1.1").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-v1.1")
# The description is tokenized with the text encoder's own tokenizer
description_tokenizer = AutoTokenizer.from_pretrained(model.config.text_encoder._name_or_path)

prompt = "Hey, how are you doing today?"
description = "A female speaker delivers a slightly expressive and animated speech with a moderate speed and pitch. The recording is of very high quality, with the speaker's voice sounding clear and very close up."

input_ids = description_tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

# Generate the waveform and save it at the model's sampling rate
generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
```
🎯 Advanced Usage - Using a specific speaker
To ensure speaker consistency across generations, this checkpoint was also trained on 34 speakers, characterized by name (e.g. Jon, Lea, Gary, Jenna, Mike, Laura).
To take advantage of this, simply adapt your text description to specify which speaker to use: "Jon's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise."
```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf

device = "cuda:0" if torch.cuda.is_available() else "cpu"

model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-v1.1").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-v1.1")
description_tokenizer = AutoTokenizer.from_pretrained(model.config.text_encoder._name_or_path)

prompt = "Hey, how are you doing today?"
# Naming a trained speaker ("Jon") in the description keeps the voice consistent across generations
description = "Jon's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise."

input_ids = description_tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
```
💡 Usage Tips
- We've set up an inference guide to make generation faster, covering SDPA, torch.compile, batching and streaming (see the short SDPA sketch after this list).
- Include the term "very clear audio" to generate the highest-quality audio, and "very noisy audio" for high levels of background noise.
- Punctuation can be used to control the prosody of the generations, e.g. use commas to add small breaks in speech.
- The remaining speech features (gender, speaking rate, pitch and reverberation) can be controlled directly through the prompt.
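As a minimal sketch of the first tip (assuming your installed versions of parler-tts and transformers accept the attn_implementation argument, as described in the inference guide), you can opt into PyTorch's scaled dot-product attention at load time:

```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if device != "cpu" else torch.float32

# Load the model with SDPA attention (and half precision on GPU) for faster inference
model = ParlerTTSForConditionalGeneration.from_pretrained(
    "parler-tts/parler-tts-mini-v1.1",
    attn_implementation="sdpa",
).to(device, dtype=torch_dtype)
```

From there, generation works exactly as in the examples above.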
📚 Documentation
Motivation
Parler-TTS is a reproduction of work from the paper Natural language guidance of high-fidelity text-to-speech with synthetic annotations by Dan Lyth and Simon King, from Stability AI and Edinburgh University respectively.
In contrast to other TTS models, Parler-TTS is a fully open-source release. All of the datasets, pre-processing, training code, and weights are released publicly under a permissive license, enabling the community to build on our work and develop their own powerful TTS models.
Parler-TTS was released alongside:
- The Parler-TTS repository - you can train and fine-tune your own version of the model.
- The Data-Speech repository - a suite of utility scripts designed to annotate speech datasets.
- The Parler-TTS organization - where you can find the annotated datasets as well as the future checkpoints.
Citation
If you found this repository useful, please consider citing this work and also the original Stability AI paper:
```bibtex
@misc{lacombe-etal-2024-parler-tts,
  author = {Yoach Lacombe and Vaibhav Srivastav and Sanchit Gandhi},
  title = {Parler-TTS},
  year = {2024},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/huggingface/parler-tts}}
}
```
```bibtex
@misc{lyth2024natural,
  title = {Natural language guidance of high-fidelity text-to-speech with synthetic annotations},
  author = {Dan Lyth and Simon King},
  year = {2024},
  eprint = {2402.01912},
  archivePrefix = {arXiv},
  primaryClass = {cs.SD}
}
```
📄 License
This model is released under the permissive Apache 2.0 license.
📦 Model Information
| Property | Details |
|----------|---------|
| Model Type | Text-to-Speech (TTS) |
| Training Data | parler-tts/mls_eng, parler-tts/libritts_r_filtered, parler-tts/libritts-r-filtered-speaker-descriptions, parler-tts/mls-eng-speaker-descriptions |
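If you'd like to inspect the training data, here is a minimal sketch (assuming the datasets library is installed and these dataset repos are publicly accessible on the Hub):

```py
from datasets import load_dataset

# Stream a few examples rather than downloading the full 45K hours
ds = load_dataset("parler-tts/mls_eng", split="train", streaming=True)
for example in ds.take(2):
    print(example.keys())
```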