🚀 Parler-TTS Tiny v1
Parler-TTS Tiny v1 is a super lightweight text-to-speech (TTS) model. It's trained on 45K hours of audio data and can generate high-quality, natural-sounding speech. You can control its features like gender, background noise, speaking rate, pitch, and reverberation using a simple text prompt.
This is the second set of models published as part of the Parler-TTS project. Along with Parler-TTS Mini v1 and Parler-TTS Large v1, the project aims to provide the community with TTS training resources and dataset pre-processing code.
🚀 Quick Start
👨‍💻 Installation
Using Parler-TTS is as simple as "bonjour". Simply install the library once:

```sh
pip install git+https://github.com/huggingface/parler-tts.git
```
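As a quick sanity check (not part of the official instructions), you can confirm the package imports cleanly before moving on:

```sh
python -c "from parler_tts import ParlerTTSForConditionalGeneration; print('parler-tts OK')"
```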
🎲 Usage Examples
Basic Usage
Parler-TTS can generate speech with controllable features using a simple text prompt. Here is an example:
```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf

# Run on GPU if available, otherwise fall back to CPU
device = "cuda:0" if torch.cuda.is_available() else "cpu"

model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-tiny-v1").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-tiny-v1")

# `prompt` is the text to be spoken; `description` controls the voice characteristics
prompt = "Hey, how are you doing today?"
description = "A female speaker delivers a slightly expressive and animated speech with a moderate speed and pitch. The recording is of very high quality, with the speaker's voice sounding clear and very close up."

input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
```
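If you're working in a notebook, you can also listen to the result inline instead of (or in addition to) writing it to disk. This is an optional sketch using IPython's standard `Audio` display, not part of the original example:

```py
from IPython.display import Audio

# Render an inline audio player for the waveform generated above
Audio(audio_arr, rate=model.config.sampling_rate)
```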
Advanced Usage
To ensure speaker consistency across generations, this checkpoint was trained on 34 named speakers. You can pick a consistent voice by naming the speaker in the text description. For example:
```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf

device = "cuda:0" if torch.cuda.is_available() else "cpu"

model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-tiny-v1").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-tiny-v1")

prompt = "Hey, how are you doing today?"
# Naming a training speaker ("Jon") in the description keeps the voice consistent across generations
description = "Jon's voice is monotone yet slightly fast in delivery, with a very close recording that has almost no background noise."

input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
```
⚠️ Important Notes
- We've set up an inference guide to make generation faster. Think SDPA, torch.compile, batching and streaming!
- Include the term "very clear audio" in the description to generate the highest-quality audio, and "very noisy audio" for high levels of background noise.
- Punctuation can be used to control the prosody of the generations, e.g. use commas to add small breaks in speech.
- The remaining speech features (gender, speaking rate, pitch and reverberation) can be controlled directly through the description, as shown in the sketch after this list.
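As a quick illustration of these tips, here is a minimal sketch (assuming the `model`, `tokenizer`, and `device` objects from the basic example above are still in scope) that only varies the description text; the exact wordings are illustrative, not a fixed vocabulary:

```py
import soundfile as sf

# The extra comma after "doing" nudges the model to insert a small pause
prompt = "Hey, how are you doing, today?"

# Hypothetical descriptions exercising the tips above: audio quality,
# background noise, gender, speaking rate, and pitch
descriptions = {
    "clean": "A male speaker with a low pitch speaks slowly in very clear audio.",
    "noisy": "A female speaker talks quickly with a high pitch in very noisy audio.",
}

for name, description in descriptions.items():
    input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
    prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
    generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
    sf.write(f"parler_tts_{name}.wav", generation.cpu().numpy().squeeze(), model.config.sampling_rate)
```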
📚 Documentation
Motivation
Parler-TTS is a reproduction of work from the paper [Natural language guidance of high-fidelity text-to-speech with synthetic annotations](https://www.text-description-to-speech.com) by Dan Lyth and Simon King, from Stability AI and Edinburgh University respectively.
Unlike other TTS models, Parler-TTS is a fully open-source release. All of the datasets, pre-processing, training code and weights are released publicly under a permissive license, enabling the community to build on our work and develop their own powerful TTS models.
Parler-TTS was released alongside the [Parler-TTS repository](https://github.com/huggingface/parler-tts), which contains the training and inference code, and the [Data-Speech repository](https://github.com/huggingface/dataspeech), which contains the dataset annotation code.
Citation
If you found this repository useful, please consider citing this work and also the original Stability AI paper:
```bibtex
@misc{lacombe-etal-2024-parler-tts,
  author = {Yoach Lacombe and Vaibhav Srivastav and Sanchit Gandhi},
  title = {Parler-TTS},
  year = {2024},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/huggingface/parler-tts}}
}

@misc{lyth2024natural,
  title = {Natural language guidance of high-fidelity text-to-speech with synthetic annotations},
  author = {Dan Lyth and Simon King},
  year = {2024},
  eprint = {2402.01912},
  archivePrefix = {arXiv},
  primaryClass = {cs.SD}
}
```
📄 License
This model is released under the permissive Apache 2.0 license.