🚀 Parler-TTS Mini v0.1
Parler-TTS Mini v0.1 is a lightweight text-to-speech (TTS) model trained on 10,500 hours of audio data. It generates high-quality, natural-sounding speech whose characteristics (e.g. gender, background noise, speaking rate, pitch and reverberation) can be controlled with a simple text prompt. This model is the first release of the Parler-TTS project, which aims to provide the community with TTS training resources and dataset pre-processing code.
🚀 Quick Start
Using Parler-TTS is straightforward. First, install the library:

```sh
pip install git+https://github.com/huggingface/parler-tts.git
```

Once installed, you can run inference with the following snippet:
```python
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf

device = "cuda:0" if torch.cuda.is_available() else "cpu"

model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler_tts_mini_v0.1").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler_tts_mini_v0.1")

# The prompt is what is spoken; the description controls how it is spoken.
prompt = "Hey, how are you doing today?"
description = "A female speaker with a slightly low-pitched voice delivers her words quite expressively, in a very confined sounding environment with clear audio quality. She speaks very fast."

input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
```
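After generation, it can be useful to sanity-check the waveform before writing it to disk. A minimal sketch, with a silent dummy array standing in for a real generation (in practice the array comes from `generation.cpu().numpy().squeeze()` and the rate from `model.config.sampling_rate`):

```python
import numpy as np

sampling_rate = 44100  # stands in for model.config.sampling_rate
# Dummy 2-second waveform; in practice this is the generated audio array.
audio_arr = np.zeros(sampling_rate * 2, dtype=np.float32)

duration_s = audio_arr.shape[-1] / sampling_rate  # length in seconds
peak = float(np.abs(audio_arr).max())             # should stay within [-1, 1]
print(f"{duration_s:.2f} s, peak amplitude {peak:.3f}")
```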
✨ Key Features
- High-quality speech generation: trained on a large amount of audio data, the model produces high-quality, natural-sounding speech.
- Controllable characteristics: gender, background noise, speaking rate, pitch and reverberation can all be steered with a simple text prompt.
- Fully open source: all datasets, pre-processing code, training code and model weights are released under a permissive license, so the community can build on top of them.
💻 Usage Examples
Basic Usage
Basic usage is identical to the Quick Start snippet above.
Usage Tips
⚠️ Important Notes
- Including "very clear audio" in the description produces the highest-quality audio; including "very noisy audio" produces audio with heavy background noise.
- Punctuation can be used to control prosody, e.g. commas add small pauses to the speech.
- The remaining speech characteristics (gender, speaking rate, pitch and reverberation) can be controlled directly through the description prompt.
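To make these tips concrete, one way to vary the controllable attributes programmatically is to assemble the description from its parts. `build_description` below is a hypothetical helper, not part of the Parler-TTS API; only the attribute phrases themselves come from the tips above:

```python
def build_description(gender="female", pitch="slightly low-pitched",
                      pace="very fast", quality="very clear audio"):
    """Assemble a Parler-TTS description string from controllable attributes."""
    return (f"A {gender} speaker with a {pitch} voice, "
            f"speaking {pace}, with {quality}.")

clean = build_description()                            # highest-quality audio
noisy = build_description(quality="very noisy audio")  # heavy background noise
slow_male = build_description(gender="male", pace="quite slowly")
```

Each resulting string would be tokenized as the `description` in the snippet above.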
📚 Documentation
Motivation
Parler-TTS is a reproduction of the paper Natural language guidance of high-fidelity text-to-speech with synthetic annotations by Dan Lyth (Stability AI) and Simon King (University of Edinburgh). Unlike other TTS models, Parler-TTS is fully open source: all datasets, pre-processing code, training code and model weights are publicly released under a permissive license, so the community can build its own powerful TTS models on top of this work.
Related Resources
📄 Citation
If you find this repository useful, please consider citing this work as well as the original Stability AI paper:
```bibtex
@misc{lacombe-etal-2024-parler-tts,
  author = {Yoach Lacombe and Vaibhav Srivastav and Sanchit Gandhi},
  title = {Parler-TTS},
  year = {2024},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/huggingface/parler-tts}}
}

@misc{lyth2024natural,
  title = {Natural language guidance of high-fidelity text-to-speech with synthetic annotations},
  author = {Dan Lyth and Simon King},
  year = {2024},
  eprint = {2402.01912},
  archivePrefix = {arXiv},
  primaryClass = {cs.SD}
}
```
📄 License
This model is released under the Apache 2.0 license.