# 🚀 OuteTTS-0.2-500M-GGUF

This repo provides quantized GGUF versions of the OuteTTS-0.2-500M model for efficient text-to-speech generation. The model supports English, Chinese, Japanese, and Korean, and can be served with the LlamaEdge API service.
## 📦 Model Information

| Property | Details |
|----------|---------|
| Base Model | OuteAI/OuteTTS-0.2-500M |
| Model Creator | OuteAI |
| Model Name | OuteTTS-0.2-500M |
| Quantized By | Second State Inc. |
| License | cc-by-nc-4.0 |
| Languages | English, Chinese, Japanese, Korean |
| Pipeline Tag | text-to-speech |
## 🚀 Quick Start

### Run with LlamaEdge

**Run as LlamaEdge service**
```bash
wasmedge --dir .:. \
  --nn-preload tts:GGML:AUTO:OuteTTS-0.2-500M-Q5_K_M.gguf \
  llama-api-server.wasm config \
  --file llama_server_config.toml \
  --tts
```
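Once the service is running, speech can be requested over its OpenAI-style HTTP API. The sketch below is an assumption-laden example: it presumes the server listens on the default port 8080 and exposes a `/v1/audio/speech` endpoint with OpenAI-compatible fields; adjust the host, port, and payload to match your LlamaEdge version.

```bash
# Hypothetical request to an OpenAI-compatible speech endpoint.
# Port 8080, the endpoint path, and the JSON field names are assumptions;
# check the llama-api-server docs for your build.
curl -X POST http://localhost:8080/v1/audio/speech \
  -H "Content-Type: application/json" \
  -d '{
        "model": "tts",
        "input": "Hello! This is OuteTTS speaking."
      }' \
  --output output.wav
```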
`llama_server_config.toml` can be derived from the template config file `llama_server_config.toml.bkp`. The recommended `[tts]` settings are shown below:
```toml
[tts]
model_name = "tts"
model_alias = "tts"
codec_model = ""
speaker_file = ""
ctx_size = 8192
batch_size = 8192
ubatch_size = 8192
n_predict = 4096
n_gpu_layers = 100
temp = 0.8
```
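The `codec_model` and `speaker_file` fields are left empty in the template; if your LlamaEdge build expects explicit paths, `codec_model` presumably points to the `wavtokenizer-large-75-ggml-f16.gguf` codec listed in the table below. If the GGUF files are not yet on disk, they can be downloaded before starting the service. The sketch below assumes the Hugging Face repository path `second-state/OuteTTS-0.2-500M-GGUF`; substitute the actual repository and the quantization variant you intend to use.

```bash
# Assumed repository path; replace if the repo or chosen quantization differs.
curl -LO https://huggingface.co/second-state/OuteTTS-0.2-500M-GGUF/resolve/main/OuteTTS-0.2-500M-Q5_K_M.gguf
curl -LO https://huggingface.co/second-state/OuteTTS-0.2-500M-GGUF/resolve/main/wavtokenizer-large-75-ggml-f16.gguf
```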
## 💻 Quantized GGUF Models

| Name | Quant method | Bits | Size | Use case |
|------|--------------|------|------|----------|
| OuteTTS-0.2-500M-Q2_K.gguf | Q2_K | 2 | 344 MB | smallest, significant quality loss - not recommended for most purposes |
| OuteTTS-0.2-500M-Q3_K_L.gguf | Q3_K_L | 3 | 375 MB | small, substantial quality loss |
| OuteTTS-0.2-500M-Q3_K_M.gguf | Q3_K_M | 3 | 361 MB | very small, high quality loss |
| OuteTTS-0.2-500M-Q3_K_S.gguf | Q3_K_S | 3 | 344 MB | very small, high quality loss |
| OuteTTS-0.2-500M-Q4_0.gguf | Q4_0 | 4 | 358 MB | legacy; small, very high quality loss - prefer using Q3_K_M |
| OuteTTS-0.2-500M-Q4_K_M.gguf | Q4_K_M | 4 | 403 MB | medium, balanced quality - recommended |
| OuteTTS-0.2-500M-Q4_K_S.gguf | Q4_K_S | 4 | 391 MB | small, greater quality loss |
| OuteTTS-0.2-500M-Q5_0.gguf | Q5_0 | 5 | 402 MB | legacy; medium, balanced quality - prefer using Q4_K_M |
| OuteTTS-0.2-500M-Q5_K_M.gguf | Q5_K_M | 5 | 426 MB | large, very low quality loss - recommended |
| OuteTTS-0.2-500M-Q5_K_S.gguf | Q5_K_S | 5 | 418 MB | large, low quality loss - recommended |
| OuteTTS-0.2-500M-Q6_K.gguf | Q6_K | 6 | 511 MB | very large, extremely low quality loss |
| OuteTTS-0.2-500M-Q8_0.gguf | Q8_0 | 8 | 537 MB | very large, extremely low quality loss - not recommended |
| OuteTTS-0.2-500M-f16.gguf | f16 | 16 | 1.00 GB | |
| wavtokenizer-large-75-ggml-f16.gguf | f16 | 16 | 1.00 GB | |
Quantized with llama.cpp b4381
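For reference, a variant such as Q5_K_M can typically be regenerated from the f16 GGUF with llama.cpp's quantization tool. This is a minimal sketch assuming a local llama.cpp checkout built at tag b4381, where the binary is named `llama-quantize`; older builds may use a different binary name.

```bash
# Produce a Q5_K_M quantization from the f16 GGUF (llama.cpp build b4381 assumed).
./llama-quantize OuteTTS-0.2-500M-f16.gguf OuteTTS-0.2-500M-Q5_K_M.gguf Q5_K_M
```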