🚀 CosyVoice
CosyVoice is a text-to-speech toolkit that supports multilingual zero-shot and cross-lingual inference and provides a streaming inference mode. It can be used for speech synthesis, voice conversion, and other scenarios.
🚀 Quick Start
Model demos and documentation
For SenseVoice, please visit the SenseVoice repo and the SenseVoice space.
Roadmap
- 2024/12
  - [x] CosyVoice2-0.5B model release
  - [x] CosyVoice2-0.5B streaming inference with no quality degradation
- 2024/09
  - [x] 25Hz CosyVoice base model
  - [x] 25Hz CosyVoice voice conversion model
- 2024/08
  - [x] Repetition-aware sampling (RAS) inference for improved LLM stability
  - [x] Streaming inference mode support, including KV cache and SDPA, to optimize the real-time factor (RTF)
- 2024/07
  - [x] Flow matching training support
  - [x] WeTextProcessing support when ttsfrd is not available
  - [x] FastAPI server and client
- TBD
  - [ ] CosyVoice2-0.5B bistream inference support
  - [ ] CosyVoice2-0.5B training and finetuning recipe
  - [ ] CosyVoice-500M trained with more multilingual data
  - [ ] More...
📦 Installation
Clone and install
git clone --recursive https://github.com/FunAudioLLM/CosyVoice.git
# If cloning the submodules fails due to network issues, run the following command until it succeeds
cd CosyVoice
git submodule update --init --recursive
Install Conda: see the Conda installation documentation. Then create a Conda environment:
conda create -n cosyvoice python=3.10
conda activate cosyvoice
# pynini is required by WeTextProcessing; install it via conda so it works on all platforms
conda install -y -c conda-forge pynini==2.1.5
pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ --trusted-host=mirrors.aliyun.com
# If you encounter sox compatibility issues
# ubuntu
sudo apt-get install sox libsox-dev
# centos
sudo yum install sox sox-devel
Model download
We strongly recommend downloading the pretrained CosyVoice2-0.5B, CosyVoice-300M, CosyVoice-300M-SFT, and CosyVoice-300M-Instruct models and the CosyVoice-ttsfrd resource.
If you are an expert in this field and are only interested in training your own CosyVoice model from scratch, you can skip this step.
Download via the Python SDK
# model download via the ModelScope SDK
from modelscope import snapshot_download
snapshot_download('iic/CosyVoice2-0.5B', local_dir='pretrained_models/CosyVoice2-0.5B')
snapshot_download('iic/CosyVoice-300M', local_dir='pretrained_models/CosyVoice-300M')
snapshot_download('iic/CosyVoice-300M-25Hz', local_dir='pretrained_models/CosyVoice-300M-25Hz')
snapshot_download('iic/CosyVoice-300M-SFT', local_dir='pretrained_models/CosyVoice-300M-SFT')
snapshot_download('iic/CosyVoice-300M-Instruct', local_dir='pretrained_models/CosyVoice-300M-Instruct')
snapshot_download('iic/CosyVoice-ttsfrd', local_dir='pretrained_models/CosyVoice-ttsfrd')
Download via git
# model download via git; make sure git lfs is installed first
mkdir -p pretrained_models
git clone https://www.modelscope.cn/iic/CosyVoice2-0.5B.git pretrained_models/CosyVoice2-0.5B
git clone https://www.modelscope.cn/iic/CosyVoice-300M.git pretrained_models/CosyVoice-300M
git clone https://www.modelscope.cn/iic/CosyVoice-300M-25Hz.git pretrained_models/CosyVoice-300M-25Hz
git clone https://www.modelscope.cn/iic/CosyVoice-300M-SFT.git pretrained_models/CosyVoice-300M-SFT
git clone https://www.modelscope.cn/iic/CosyVoice-300M-Instruct.git pretrained_models/CosyVoice-300M-Instruct
git clone https://www.modelscope.cn/iic/CosyVoice-ttsfrd.git pretrained_models/CosyVoice-ttsfrd
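Whichever download method you use, a quick sanity check that each snapshot landed where the usage examples below expect it; a minimal sketch (the paths are just the target directories from the commands above):
import os
# verify each expected pretrained_models/<name> directory exists
for name in ['CosyVoice2-0.5B', 'CosyVoice-300M', 'CosyVoice-300M-25Hz',
             'CosyVoice-300M-SFT', 'CosyVoice-300M-Instruct', 'CosyVoice-ttsfrd']:
    path = os.path.join('pretrained_models', name)
    print(path, 'OK' if os.path.isdir(path) else 'MISSING')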
Optionally, you can unzip the ttsfrd resource and install the ttsfrd package for better text normalization performance.
Note that this step is not required; if the ttsfrd package is not installed, WeTextProcessing is used by default.
cd pretrained_models/CosyVoice-ttsfrd/
unzip resource.zip -d .
pip install ttsfrd-0.3.6-cp38-cp38-linux_x86_64.whl
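For reference, you can exercise the default WeTextProcessing backend directly to see what text normalization does; a minimal sketch, assuming WeTextProcessing was installed via requirements.txt:
# WeTextProcessing's Chinese normalizer expands numbers, dates, units, etc. into speakable words
from tn.chinese.normalizer import Normalizer
normalizer = Normalizer()
print(normalizer.normalize('這台電腦售價3999元'))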
💻 Usage Examples
Basic usage
For zero-shot/cross-lingual inference, use the CosyVoice2-0.5B or CosyVoice-300M model. For SFT inference, use the CosyVoice-300M-SFT model. For instruct inference, use the CosyVoice-300M-Instruct model. We strongly recommend the CosyVoice2-0.5B model for better streaming performance.
First, add third_party/Matcha-TTS to your PYTHONPATH:
export PYTHONPATH=third_party/Matcha-TTS
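If you prefer to set this from inside Python instead of the shell, an equivalent snippet (run it before the imports below):
import sys
sys.path.insert(0, 'third_party/Matcha-TTS')  # same effect as the export above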
from cosyvoice.cli.cosyvoice import CosyVoice, CosyVoice2
from cosyvoice.utils.file_utils import load_wav
import torchaudio
## cosyvoice2 usage
cosyvoice2 = CosyVoice2('pretrained_models/CosyVoice2-0.5B', load_jit=False, load_onnx=False, load_trt=False)
# zero_shot usage
prompt_speech_16k = load_wav('zero_shot_prompt.wav', 16000)
for i, j in enumerate(cosyvoice2.inference_zero_shot('收到好友從遠方寄來的生日禮物,那份意外的驚喜與深深的祝福讓我心中充滿了甜蜜的快樂,笑容如花兒般綻放。', '希望你以後能夠做的比我還好呦。', prompt_speech_16k, stream=True)):
    torchaudio.save('zero_shot_{}.wav'.format(i), j['tts_speech'], cosyvoice2.sample_rate)
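With stream=True the synthesized audio arrives in chunks; a minimal sketch of stitching the chunks into one file (assuming each 'tts_speech' chunk is a [1, T] tensor, as in the save call above):
import torch
chunks = [j['tts_speech'] for j in cosyvoice2.inference_zero_shot('收到好友從遠方寄來的生日禮物,那份意外的驚喜與深深的祝福讓我心中充滿了甜蜜的快樂,笑容如花兒般綻放。', '希望你以後能夠做的比我還好呦。', prompt_speech_16k, stream=True)]
# concatenate along the time axis and save one complete waveform
torchaudio.save('zero_shot_full.wav', torch.cat(chunks, dim=1), cosyvoice2.sample_rate)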
## cosyvoice usage
cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M-SFT', load_jit=True, load_onnx=False, fp16=True)
# sft usage
print(cosyvoice.list_avaliable_spks())
# change stream=True for chunk stream inference
for i, j in enumerate(cosyvoice.inference_sft('你好,我是通義生成式語音大模型,請問有什麼可以幫您的嗎?', '中文女', stream=False)):
    torchaudio.save('sft_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M-25Hz') # or change to pretrained_models/CosyVoice-300M for 50Hz inference
# zero_shot usage, <|zh|><|en|><|jp|><|yue|><|ko|> for Chinese/English/Japanese/Cantonese/Korean
prompt_speech_16k = load_wav('zero_shot_prompt.wav', 16000)
for i, j in enumerate(cosyvoice.inference_zero_shot('收到好友從遠方寄來的生日禮物,那份意外的驚喜與深深的祝福讓我心中充滿了甜蜜的快樂,笑容如花兒般綻放。', '希望你以後能夠做的比我還好呦。', prompt_speech_16k, stream=False)):
    torchaudio.save('zero_shot_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
# cross_lingual usage
prompt_speech_16k = load_wav('cross_lingual_prompt.wav', 16000)
for i, j in enumerate(cosyvoice.inference_cross_lingual('<|en|>And then later on, fully acquiring that company. So keeping management in line, interest in line with the asset that\'s coming into the family is a reason why sometimes we don\'t buy the whole thing.', prompt_speech_16k, stream=False)):
    torchaudio.save('cross_lingual_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
# vc usage
prompt_speech_16k = load_wav('zero_shot_prompt.wav', 16000)
source_speech_16k = load_wav('cross_lingual_prompt.wav', 16000)
for i, j in enumerate(cosyvoice.inference_vc(source_speech_16k, prompt_speech_16k, stream=False)):
    torchaudio.save('vc_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M-Instruct')
# instruct usage, support <laughter></laughter><strong></strong>[laughter][breath]
for i, j in enumerate(cosyvoice.inference_instruct('在面對挑戰時,他展現了非凡的<strong>勇氣</strong>與<strong>智慧</strong>。', '中文男', 'Theo \'Crimson\', is a fiery, passionate rebel leader. Fights with fervor for justice, but struggles with impulsiveness.', stream=False)):
    torchaudio.save('instruct_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
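The roadmap above mentions optimizing the real-time factor (RTF, synthesis time divided by audio duration); a minimal sketch of measuring it, reusing the SFT model from the basic usage (the timing approach is ours, not part of the library):
import time
import torch
cosyvoice_sft = CosyVoice('pretrained_models/CosyVoice-300M-SFT')
start = time.time()
chunks = [j['tts_speech'] for j in cosyvoice_sft.inference_sft('你好,我是通義生成式語音大模型。', '中文女', stream=False)]
elapsed = time.time() - start
speech = torch.cat(chunks, dim=1)
# RTF < 1 means synthesis runs faster than real time
print('RTF: {:.3f}'.format(elapsed / (speech.shape[1] / cosyvoice_sft.sample_rate)))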
Launch the web demo
You can use the web demo page to get familiar with CosyVoice quickly. The web demo supports SFT, zero-shot, cross-lingual, and instruct inference.
# change iic/CosyVoice-300M-SFT for sft inference, or iic/CosyVoice-300M-Instruct for instruct inference
python3 webui.py --port 50000 --model_dir pretrained_models/CosyVoice-300M
Advanced usage
For advanced users, training and inference scripts are provided in examples/libritts/cosyvoice/run.sh; you can follow this recipe to get familiar with CosyVoice.
Build for deployment
Optionally, if you want to use gRPC for service deployment, you can follow the steps below; otherwise, skip this step.
cd runtime/python
docker build -t cosyvoice:v1.0 .
# change iic/CosyVoice-300M to iic/CosyVoice-300M-Instruct if you want to use instruct inference
# for grpc usage
docker run -d --runtime=nvidia -p 50000:50000 cosyvoice:v1.0 /bin/bash -c "cd /opt/CosyVoice/CosyVoice/runtime/python/grpc && python3 server.py --port 50000 --max_conc 4 --model_dir iic/CosyVoice-300M && sleep infinity"
cd grpc && python3 client.py --port 50000 --mode <sft|zero_shot|cross_lingual|instruct>
# for fastapi usage
docker run -d --runtime=nvidia -p 50000:50000 cosyvoice:v1.0 /bin/bash -c "cd /opt/CosyVoice/CosyVoice/runtime/python/fastapi && python3 server.py --port 50000 --model_dir iic/CosyVoice-300M && sleep infinity"
cd fastapi && python3 client.py --port 50000 --mode <sft|zero_shot|cross_lingual|instruct>
📚 Documentation
Discussion and communication
You can discuss directly on GitHub Issues, or scan the QR code to join our official DingTalk chat group.
Acknowledgements
This project borrows a lot of code from the following projects:
Citation
@article{du2024cosyvoice,
  title={Cosyvoice: A scalable multilingual zero-shot text-to-speech synthesizer based on supervised semantic tokens},
  author={Du, Zhihao and Chen, Qian and Zhang, Shiliang and Hu, Kai and Lu, Heng and Yang, Yexin and Hu, Hangrui and Zheng, Siqi and Gu, Yue and Ma, Ziyang and others},
  journal={arXiv preprint arXiv:2407.05407},
  year={2024}
}
Disclaimer
The content above is provided for academic purposes only and is intended to demonstrate technical capabilities. Some examples are sourced from the internet; if any content infringes on your rights, please contact us to request its removal.