🚀 Parler-TTS Mini v1 - Jenny
Parler-TTS Mini v1 - Jenny is a fine-tuned version of Parler-TTS Mini v1, trained on the 30-hour, single-speaker, high-quality Jenny (she's Irish ☘️) dataset, a dataset suitable for training text-to-speech (TTS) models. It is used in largely the same way as Parler-TTS v1: simply include the keyword "Jenny" in the voice description.
🚀 Quick Start
📦 Installation
You can install the required library with the following command:
pip install git+https://github.com/huggingface/parler-tts.git
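The inference example below also uses soundfile to write the generated audio to a WAV file. If it is not already available in your environment, you may need to install it separately:
pip install soundfile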
💻 Usage Examples
Basic Usage
Once installed, you can run inference with the following code snippet:
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf

device = "cuda:0" if torch.cuda.is_available() else "cpu"

# Load the fine-tuned model and its tokenizer
model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-mini-v1-jenny").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-mini-v1-jenny")

# The text to be spoken, plus a voice description that includes the keyword "Jenny"
prompt = "Hey, how are you doing today? My name is Jenny, and I'm here to help you with any questions you have."
description = "Jenny speaks at an average pace with an animated delivery in a very confined sounding environment with clear audio quality."

# Tokenize the description and the prompt separately
input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

# Generate the audio and save it as a WAV file at the model's sampling rate
generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
📚 Detailed Documentation
📄 Citation
If you found this repository useful, please consider citing this work, as well as the original Stability AI paper:
@misc{lacombe-etal-2024-parler-tts,
author = {Yoach Lacombe and Vaibhav Srivastav and Sanchit Gandhi},
title = {Parler-TTS},
year = {2024},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/huggingface/parler-tts}}
}
@misc{lyth2024natural,
title={Natural language guidance of high-fidelity text-to-speech with synthetic annotations},
author={Dan Lyth and Simon King},
year={2024},
eprint={2402.01912},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
📄 License
Attribution is required when using this dataset to generate audio in response to user actions in software, websites, projects, or interfaces (including voice interfaces). Attribution means: the voice must be referred to as "Jenny", and, wherever practical, as "Jenny (Dioco)". Attribution is not required when distributing the generated audio clips (although it is welcome). Commercial use is permitted. Don't do unfair things, such as claiming the dataset is your own. There are no other restrictions.