XPhoneBERT: A Pre-trained Multilingual Model for Phoneme Representations for Text-to-Speech
XPhoneBERT is the first pre-trained multilingual model for phoneme representations in text-to-speech (TTS). It shares the same architecture as BERT-base and is trained using the RoBERTa pre-training approach on 330M phoneme-level sentences from nearly 100 languages and locales. Experimental results indicate that using XPhoneBERT as an input phoneme encoder significantly enhances the performance of a strong neural TTS model in terms of naturalness and prosody. It also aids in generating fairly high-quality speech with limited training data.
The general architecture and experimental results of XPhoneBERT can be found in our INTERSPEECH 2023 paper:
@inproceedings{xphonebert,
    title     = {{XPhoneBERT: A Pre-trained Multilingual Model for Phoneme Representations for Text-to-Speech}},
    author    = {Linh The Nguyen and Thinh Pham and Dat Quoc Nguyen},
    booktitle = {Proceedings of the 24th Annual Conference of the International Speech Communication Association (INTERSPEECH)},
    year      = {2023},
    pages     = {5506--5510}
}
Important Note
Please CITE our paper when XPhoneBERT is used to help produce published results or is incorporated into other software.
For further information or requests, please visit XPhoneBERT's homepage!
Quick Start
Features
- First pre-trained multilingual model for phoneme representations in TTS.
- Same architecture as BERT-base, trained with RoBERTa approach.
- Improves TTS model performance in naturalness and prosody.
- Helps generate high-quality speech with limited training data.
Installation
- Install `transformers` with pip: `pip install transformers`, or install `transformers` from source.
- Install `text2phonemesequence` with pip: `pip install text2phonemesequence`

Our `text2phonemesequence` package converts text sequences into phoneme-level sequences; it was employed to construct our multilingual phoneme-level pre-training data. We build `text2phonemesequence` by incorporating the CharsiuG2P and segments toolkits, which perform text-to-phoneme conversion and phoneme segmentation, respectively.
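As a quick illustration, here is a minimal sketch of converting a word-segmented English sentence into a phoneme sequence; the 'eng-us' language code and the example sentence are our own assumptions (check the supported-language list linked in the note below):

from text2phonemesequence import Text2PhonemeSequence

# Load the converter for US English ('eng-us' is assumed here); set is_cuda=False to run on CPU.
text2phone_model = Text2PhonemeSequence(language='eng-us', is_cuda=False)

# The input must already be word-segmented (and text-normalized if applicable).
phonemes = text2phone_model.infer_sentence("this is a test sentence .")
print(phonemes)  # phoneme-level sequence ready for the XPhoneBERT tokenizer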
Important Note
- Initializing `text2phonemesequence` for each language requires its corresponding ISO 639-3 code. The ISO 639-3 codes of supported languages are available HERE.
- `text2phonemesequence` takes a word-segmented sequence as input, and users might also perform text normalization on the word-segmented sequence before feeding it into `text2phonemesequence` (see the word-segmentation sketch after this list). When creating our pre-training data, we perform word and sentence segmentation on all text documents in each language using the spaCy toolkit, except for Vietnamese, where we employ the VnCoreNLP toolkit. We also use the text normalization component from the NVIDIA NeMo toolkit for English, German, Spanish, and Chinese, and the Vinorm text normalization package for Vietnamese.
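A minimal word-segmentation sketch with spaCy, as mentioned above; the `en_core_web_sm` pipeline and the example sentence are illustrative assumptions, and no text normalization is applied here:

import spacy

# Assumes the pipeline has been downloaded: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")
doc = nlp("XPhoneBERT improves naturalness and prosody in neural TTS. It also helps with limited data.")

# One word-segmented sentence per entry, ready to feed into text2phonemesequence.
word_segmented_sentences = [" ".join(token.text for token in sent) for sent in doc.sents]
print(word_segmented_sentences)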
Documentation
Pre-trained model
Property | Details
:--- | :---
Model | `vinai/xphonebert-base`
#params | 88M
Arch. | base
Max length | 512
Pre-training data | 330M phoneme-level sentences from nearly 100 languages and locales
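To inspect these specifications from the released checkpoint, here is a small sketch using the standard Hugging Face config attributes (a sanity check, not an official part of the usage example):

from transformers import AutoConfig

# Download and inspect the checkpoint's configuration.
config = AutoConfig.from_pretrained("vinai/xphonebert-base")
print(config.hidden_size, config.num_hidden_layers)  # base architecture
print(config.max_position_embeddings)                # position-embedding size backing the 512-token max length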
Usage Examples
Basic Usage
import torch
from transformers import AutoModel, AutoTokenizer
from text2phonemesequence import Text2PhonemeSequence

# Load XPhoneBERT and its tokenizer
xphonebert = AutoModel.from_pretrained("vinai/xphonebert-base")
tokenizer = AutoTokenizer.from_pretrained("vinai/xphonebert-base")

# Load Text2PhonemeSequence for Japanese (language code "jpn")
text2phone_model = Text2PhonemeSequence(language='jpn', is_cuda=True)

# Input sequence that is already word-segmented (and text-normalized if applicable); the Japanese sentence means "This is a test text."
sentence = "これ は 、 テスト テキスト です ."

input_phonemes = text2phone_model.infer_sentence(sentence)
input_ids = tokenizer(input_phonemes, return_tensors="pt")

with torch.no_grad():
    features = xphonebert(**input_ids)
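The returned `features` follow the standard Hugging Face output format; the per-phoneme representations are in `last_hidden_state`. The mean-pooled sentence vector below is our own illustrative choice, not something prescribed by the paper:

# Contextualized phoneme embeddings: (batch_size, sequence_length, hidden_size)
phoneme_embeddings = features.last_hidden_state

# Illustrative sentence-level vector: mean-pool over non-padding positions.
mask = input_ids["attention_mask"].unsqueeze(-1).float()
sentence_embedding = (phoneme_embeddings * mask).sum(dim=1) / mask.sum(dim=1)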
License
This project is licensed under the MIT license.