🚀 CosyVoice
CosyVoice is a text-to-speech toolkit that provides multiple models and resources. It supports zero-shot, cross-lingual, SFT, and instruct inference modes, and also ships with a web demo and advanced usage scripts.
🚀 Quick Start
You can explore CosyVoice through the following links: demo, paper, studio, and code.
If you are interested in SenseVoice, please visit the SenseVoice repo and SenseVoice space.
📦 Installation
Clone and install
- Clone the repository:
```sh
git clone --recursive https://github.com/FunAudioLLM/CosyVoice.git
# If cloning the submodules fails due to network problems, run the following command until it succeeds
cd CosyVoice
git submodule update --init --recursive
```
- Install Conda: see https://docs.conda.io/en/latest/miniconda.html.
- Create a Conda environment:
```sh
conda create -n cosyvoice python=3.8
conda activate cosyvoice
# pynini is required by WeTextProcessing; install it with conda, since the conda package works on all platforms.
conda install -y -c conda-forge pynini==2.1.5
pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ --trusted-host=mirrors.aliyun.com

# If you run into sox compatibility issues:
# ubuntu
sudo apt-get install sox libsox-dev
# centos
sudo yum install sox sox-devel
```
Model download
We strongly recommend downloading the pretrained CosyVoice-300M, CosyVoice-300M-SFT, and CosyVoice-300M-Instruct models together with the CosyVoice-ttsfrd resource. If you are an expert in this field and only want to train your own CosyVoice model from scratch, you can skip this step.
```python
# Download via the ModelScope SDK
from modelscope import snapshot_download
snapshot_download('iic/CosyVoice-300M', local_dir='pretrained_models/CosyVoice-300M')
snapshot_download('iic/CosyVoice-300M-SFT', local_dir='pretrained_models/CosyVoice-300M-SFT')
snapshot_download('iic/CosyVoice-300M-Instruct', local_dir='pretrained_models/CosyVoice-300M-Instruct')
snapshot_download('iic/CosyVoice-ttsfrd', local_dir='pretrained_models/CosyVoice-ttsfrd')
```
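If you rerun the download script, a small guard can skip models that are already on disk. This is a minimal sketch, not part of CosyVoice; `missing_models` and the directory-existence check are assumptions:

```python
import os

# Hypothetical helper (not part of CosyVoice): skip models whose target
# directory already exists, so reruns only fetch what is missing.
MODELS = {
    'iic/CosyVoice-300M': 'pretrained_models/CosyVoice-300M',
    'iic/CosyVoice-300M-SFT': 'pretrained_models/CosyVoice-300M-SFT',
    'iic/CosyVoice-300M-Instruct': 'pretrained_models/CosyVoice-300M-Instruct',
    'iic/CosyVoice-ttsfrd': 'pretrained_models/CosyVoice-ttsfrd',
}

def missing_models(models=MODELS):
    """Return (model_id, local_dir) pairs whose directory is absent."""
    return [(m, d) for m, d in models.items() if not os.path.isdir(d)]

# Usage:
# from modelscope import snapshot_download
# for model_id, local_dir in missing_models():
#     snapshot_download(model_id, local_dir=local_dir)
```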
```sh
# Download via git; make sure git lfs is installed first
mkdir -p pretrained_models
git clone https://www.modelscope.cn/iic/CosyVoice-300M.git pretrained_models/CosyVoice-300M
git clone https://www.modelscope.cn/iic/CosyVoice-300M-SFT.git pretrained_models/CosyVoice-300M-SFT
git clone https://www.modelscope.cn/iic/CosyVoice-300M-Instruct.git pretrained_models/CosyVoice-300M-Instruct
git clone https://www.modelscope.cn/iic/CosyVoice-ttsfrd.git pretrained_models/CosyVoice-ttsfrd
```
Optionally, you can unzip the ttsfrd resource and install the ttsfrd package for better text normalization performance. This step is not required; if you skip it, WeTextProcessing is used by default.
```sh
cd pretrained_models/CosyVoice-ttsfrd/
unzip resource.zip -d .
pip install ttsfrd-0.3.6-cp38-cp38-linux_x86_64.whl
```
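The fallback behavior described above amounts to an availability check at import time. A minimal sketch (illustrative only; CosyVoice's actual selection logic may differ):

```python
# Sketch of the frontend fallback described above: prefer the optional
# ttsfrd package, fall back to WeTextProcessing when it is not installed.
# (Illustrative only; CosyVoice's actual logic may differ.)
def pick_text_frontend():
    try:
        import ttsfrd  # noqa: F401 -- only checking availability
        return 'ttsfrd'
    except ImportError:
        return 'WeTextProcessing'

print(pick_text_frontend())
```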
💻 Usage Examples
Basic usage
- For zero-shot and cross-lingual inference, use the CosyVoice-300M model.
- For SFT inference, use the CosyVoice-300M-SFT model.
- For instruct inference, use the CosyVoice-300M-Instruct model.
First, add third_party/Matcha-TTS to your PYTHONPATH:
```sh
export PYTHONPATH=third_party/Matcha-TTS
```
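If you would rather not modify the shell environment, the same path can be added from inside Python (this assumes your current working directory is the repository root):

```python
import os
import sys

# Equivalent to `export PYTHONPATH=third_party/Matcha-TTS`, done from
# inside Python; assumes the current working directory is the repo root.
matcha_path = os.path.join(os.getcwd(), 'third_party', 'Matcha-TTS')
if matcha_path not in sys.path:
    sys.path.insert(0, matcha_path)
```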
```python
from cosyvoice.cli.cosyvoice import CosyVoice
from cosyvoice.utils.file_utils import load_wav
import torchaudio

cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M-SFT')
# sft usage
print(cosyvoice.list_avaliable_spks())
# change stream=True for chunk stream inference
for i, j in enumerate(cosyvoice.inference_sft('你好,我是通义生成式语音大模型,请问有什么可以帮您的吗?', '中文女', stream=False)):
    torchaudio.save('sft_{}.wav'.format(i), j['tts_speech'], 22050)

cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M')
# zero_shot usage, <|zh|><|en|><|jp|><|yue|><|ko|> for Chinese/English/Japanese/Cantonese/Korean
prompt_speech_16k = load_wav('zero_shot_prompt.wav', 16000)
for i, j in enumerate(cosyvoice.inference_zero_shot('收到好友从远方寄来的生日礼物,那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐,笑容如花儿般绽放。', '希望你以后能够做的比我还好呦。', prompt_speech_16k, stream=False)):
    torchaudio.save('zero_shot_{}.wav'.format(i), j['tts_speech'], 22050)

# cross_lingual usage
prompt_speech_16k = load_wav('cross_lingual_prompt.wav', 16000)
for i, j in enumerate(cosyvoice.inference_cross_lingual('<|en|>And then later on, fully acquiring that company. So keeping management in line, interest in line with the asset that\'s coming into the family is a reason why sometimes we don\'t buy the whole thing.', prompt_speech_16k, stream=False)):
    torchaudio.save('cross_lingual_{}.wav'.format(i), j['tts_speech'], 22050)

cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M-Instruct')
# instruct usage, supported special tokens: <laughter></laughter> <strong></strong> [laughter] [breath]
for i, j in enumerate(cosyvoice.inference_instruct('在面对挑战时,他展现了非凡的<strong>勇气</strong>与<strong>智慧</strong>。', '中文男', 'Theo \'Crimson\', is a fiery, passionate rebel leader. Fights with fervor for justice, but struggles with impulsiveness.', stream=False)):
    torchaudio.save('instruct_{}.wav'.format(i), j['tts_speech'], 22050)
```
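When streaming (`stream=True`), each iteration yields one chunk. To write a single file instead of one file per chunk, you can concatenate the chunks first. A minimal sketch, assuming each output dict carries a `[1, T]` `tts_speech` tensor as in the examples above; `concat_stream` is a hypothetical helper, not a CosyVoice API:

```python
import torch

def concat_stream(outputs):
    """Concatenate streamed CosyVoice outputs into one [1, T] tensor.

    `outputs` is an iterable of dicts, each with a 'tts_speech' tensor
    of shape [1, chunk_len], e.g. the generator returned by
    cosyvoice.inference_sft(..., stream=True).
    """
    return torch.cat([out['tts_speech'] for out in outputs], dim=1)

# Usage (assumes a loaded model as in the examples above):
# speech = concat_stream(cosyvoice.inference_sft(text, '中文女', stream=True))
# torchaudio.save('sft_full.wav', speech, 22050)
```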
Launch the web demo
You can use the web demo page to get familiar with CosyVoice quickly. The web demo supports SFT, zero-shot, cross-lingual, and instruct inference. See the demo website for details.
```sh
# Change to iic/CosyVoice-300M-SFT for SFT inference, or iic/CosyVoice-300M-Instruct for instruct inference
python3 webui.py --port 50000 --model_dir pretrained_models/CosyVoice-300M
```
Advanced usage
For advanced users, we provide training and inference scripts in examples/libritts/cosyvoice/run.sh. You can follow this recipe to get familiar with CosyVoice.
Build for deployment
If you want to deploy a service with gRPC or FastAPI, follow the steps below; otherwise you can skip this section.
```sh
cd runtime/python
docker build -t cosyvoice:v1.0 .
# Change iic/CosyVoice-300M to iic/CosyVoice-300M-Instruct if you want to use instruct inference
# for grpc usage
docker run -d --runtime=nvidia -p 50000:50000 cosyvoice:v1.0 /bin/bash -c "cd /opt/CosyVoice/CosyVoice/runtime/python/grpc && python3 server.py --port 50000 --max_conc 4 --model_dir iic/CosyVoice-300M && sleep infinity"
cd grpc && python3 client.py --port 50000 --mode <sft|zero_shot|cross_lingual|instruct>
# for fastapi usage
docker run -d --runtime=nvidia -p 50000:50000 cosyvoice:v1.0 /bin/bash -c "cd /opt/CosyVoice/CosyVoice/runtime/python/fastapi && MODEL_DIR=iic/CosyVoice-300M fastapi dev --port 50000 server.py && sleep infinity"
cd fastapi && python3 client.py --port 50000 --mode <sft|zero_shot|cross_lingual|instruct>
```
📚 Detailed Documentation
Discussion & Communication
You can discuss directly on GitHub Issues. You can also scan the QR code to join our official DingTalk chat group.
Acknowledgements
We borrowed a lot of code from the following projects:
Disclaimer
The content above is provided for academic purposes only and is intended to demonstrate technical capabilities. Some examples are sourced from the internet. If any content infringes your rights, please contact us to request its removal.




