🚀 ⓍTTS
ⓍTTS is a voice generation model that enables voice cloning across different languages using just a 6-second audio clip, eliminating the need for extensive training data spanning countless hours. It's the same or similar model powering Coqui Studio and Coqui API.
🚀 Quick Start
Using 🐸TTS API
```python
from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2", gpu=True)

tts.tts_to_file(text="It took me quite a long time to develop a voice, and now that I have it I'm not going to be silent.",
                file_path="output.wav",
                speaker_wav="/path/to/target/speaker.wav",
                language="en")
```
Using 🐸TTS Command line
```bash
tts --model_name tts_models/multilingual/multi-dataset/xtts_v2 \
    --text "Bugün okula gitmek istemiyorum." \
    --speaker_wav /path/to/target/speaker.wav \
    --language_idx tr \
    --use_cuda true
```
Using the model directly
```python
from TTS.tts.configs.xtts_config import XttsConfig
from TTS.tts.models.xtts import Xtts

config = XttsConfig()
config.load_json("/path/to/xtts/config.json")
model = Xtts.init_from_config(config)
model.load_checkpoint(config, checkpoint_dir="/path/to/xtts/", eval=True)
model.cuda()

outputs = model.synthesize(
    "It took me quite a long time to develop a voice and now that I have it I am not going to be silent.",
    config,
    speaker_wav="/data/TTS-public/_refclips/3.wav",
    gpt_cond_len=3,
    language="en",
)
```
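To write the generated audio to disk, here is a minimal sketch, assuming the returned `outputs` dict exposes the waveform under a `"wav"` key and that the audio is produced at the model's 24kHz sampling rate (see Features below); the output filename is a placeholder:

```python
import numpy as np
import soundfile as sf  # assumption: soundfile is installed and used only to write the WAV file

wav = outputs["wav"]  # assumption: the synthesized waveform is stored under "wav"
# If the model returns a torch tensor, move it to CPU and convert to NumPy first.
if not isinstance(wav, np.ndarray):
    wav = wav.squeeze().cpu().numpy()
sf.write("xtts_output.wav", wav, samplerate=24000)  # 24 kHz, per the Features section
```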
✨ Features
- Supports 17 languages.
- Voice cloning with just a 6-second audio clip.
- Emotion and style transfer by cloning.
- Cross-language voice cloning (see the sketch after this list).
- Multi-lingual speech generation.
- 24kHz sampling rate.
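As a concrete illustration of cross-language cloning with the API from the Quick Start, the sketch below clones a reference voice into Spanish; the text, paths, and output filename are placeholders:

```python
from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2", gpu=True)

# The reference clip can be in any language; the output is spoken in Spanish
# with the cloned voice. Paths and text below are illustrative placeholders.
tts.tts_to_file(
    text="Hola, este es un ejemplo de clonación de voz entre idiomas.",
    file_path="output_es.wav",
    speaker_wav="/path/to/target/speaker.wav",
    language="es",
)
```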
📚 Documentation
Updates over XTTS-v1
- 2 new languages: Hungarian and Korean.
- Architectural improvements for speaker conditioning.
- Enables the use of multiple speaker references and interpolation between speakers (see the sketch after this list).
- Stability improvements.
- Better prosody and audio quality across the board.
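Here is a minimal sketch of conditioning on more than one reference clip with the lower-level model API from "Using the model directly" above. It assumes `model` and `config` are loaded as in that example, and that the `Xtts` class in the code-base exposes `get_conditioning_latents` and `inference` with roughly the signatures shown; the reference paths are placeholders:

```python
# Assumption: `model` is the Xtts instance loaded in "Using the model directly" above.
# Passing several reference clips conditions the cloned voice on all of them.
gpt_cond_latent, speaker_embedding = model.get_conditioning_latents(
    audio_path=["/path/to/ref_clip_1.wav", "/path/to/ref_clip_2.wav"],
)

out = model.inference(
    "This voice is conditioned on more than one reference clip.",
    "en",
    gpt_cond_latent,
    speaker_embedding,
)
```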
Languages
XTTS-v2 supports 17 languages: English (en), Spanish (es), French (fr), German (de), Italian (it), Portuguese (pt), Polish (pl), Turkish (tr), Russian (ru), Dutch (nl), Czech (cs), Arabic (ar), Chinese (zh-cn), Japanese (ja), Hungarian (hu), Korean (ko), Hindi (hi).
Stay tuned as we continue to add support for more languages. If you have any language requests, feel free to reach out!
Code
The [code-base](https://github.com/coqui-ai/TTS) supports inference and fine-tuning.
Demo Spaces
- XTTS Space: You can see how the model performs on the supported languages and try it with your own reference audio or microphone input.
- [XTTS Voice Chat with Mistral or Zephyr](https://huggingface.co/spaces/coqui/voice-chat-with-mistral): You can experience streaming voice chat with Mistral 7B Instruct or Zephyr 7B Beta.
📄 License
This model is licensed under the Coqui Public Model License (CPML). There's a lot that goes into a license for generative models; you can read more about the origin story of CPML here.
📞 Contact
Come and join our 🐸Community. We're active on Discord and Twitter. You can also email us at info@coqui.ai.