WavLM-Large
Microsoft's WavLM is a large model pretrained on 16kHz sampled speech audio. It aims to solve full-stack downstream speech tasks.
Quick Start
This is an English pre-trained speech model. It must be fine-tuned on a downstream task such as speech recognition or audio classification before it can be used for inference. Because it was pre-trained on English data, it performs well only in English. It has shown good performance on the SUPERB benchmark.
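As a quick sanity check before any fine-tuning, the pre-trained encoder can be used to extract hidden representations. The snippet below is a minimal sketch, assuming the checkpoint is published as `microsoft/wavlm-large` on the Hugging Face Hub and that `transformers` and `torch` are installed (neither dependency is stated in the original README):

```python
# pip install transformers torch   <- assumed setup, not specified in the original README
import torch
from transformers import AutoFeatureExtractor, WavLMModel

# "microsoft/wavlm-large" is an assumption about the published checkpoint name.
feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/wavlm-large")
model = WavLMModel.from_pretrained("microsoft/wavlm-large")

# WavLM expects 16kHz mono audio; one second of silence stands in for real speech here.
speech = torch.zeros(16000).numpy()
inputs = feature_extractor(speech, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden_size)
print(hidden_states.shape)
```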
Features
- Universal Representation Learning: Aims to learn universal representations for all speech tasks, taking into account the multi-faceted information in speech signals, such as speaker identity, paralinguistics, and spoken content.
- Improved Transformer Structure: Equips the Transformer structure with gated relative position bias to enhance its performance on recognition tasks.
- Utterance Mixing Training: Introduces an utterance mixing training strategy for better speaker discrimination, creating additional overlapped utterances and incorporating them during training (see the sketch after this list).
- Large-Scale Training: Scales up the training dataset from 60k hours to 94k hours.
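The utterance mixing idea can be illustrated with a short, self-contained sketch. This is not the authors' implementation: the crop length, energy ratio, and placement below are placeholder choices, and the official recipe is defined in the paper and released code.

```python
import torch

def mix_utterances(primary: torch.Tensor, secondary: torch.Tensor,
                   max_overlap: float = 0.5, snr_db: float = 5.0) -> torch.Tensor:
    """Overlay a random chunk of a secondary utterance onto the primary utterance.

    Illustrative only: the exact crop lengths, mixing probabilities, and energy
    ratios used for WavLM are those of the paper, not these placeholder values.
    """
    # Crop at most `max_overlap` of the primary utterance's length from the secondary speaker.
    chunk_len = min(int(len(primary) * max_overlap), len(secondary))
    sec_start = torch.randint(0, len(secondary) - chunk_len + 1, (1,)).item()
    chunk = secondary[sec_start:sec_start + chunk_len]

    # Scale the chunk so that the primary speaker stays dominant at the chosen SNR.
    primary_energy = primary.pow(2).mean().clamp_min(1e-8)
    chunk_energy = chunk.pow(2).mean().clamp_min(1e-8)
    scale = torch.sqrt(primary_energy / (chunk_energy * 10 ** (snr_db / 10)))

    # Add the scaled chunk at a random position; training targets remain those
    # of the primary utterance, which encourages speaker discrimination.
    mixed = primary.clone()
    pos = torch.randint(0, len(primary) - chunk_len + 1, (1,)).item()
    mixed[pos:pos + chunk_len] += scale * chunk
    return mixed
```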
Installation
The original README does not provide specific installation steps.
Usage Examples
Speech Recognition
To fine-tune the model for speech recognition, see [the official speech recognition example](https://github.com/huggingface/transformers/tree/master/examples/pytorch/speech-recognition).
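In addition to the linked example, the following sketch shows one way to attach a CTC head to the checkpoint for fine-tuning. The vocabulary file, checkpoint name, and hyperparameters are illustrative assumptions; the vocabulary must be built from your own labeled transcripts, as described in the blog linked under Documentation below.

```python
from transformers import (Wav2Vec2CTCTokenizer, Wav2Vec2FeatureExtractor,
                          Wav2Vec2Processor, WavLMForCTC)

# "vocab.json" is a hypothetical file built from your labeled transcripts.
tokenizer = Wav2Vec2CTCTokenizer("vocab.json", unk_token="[UNK]",
                                 pad_token="[PAD]", word_delimiter_token="|")
feature_extractor = Wav2Vec2FeatureExtractor(feature_size=1, sampling_rate=16000,
                                             padding_value=0.0, do_normalize=True,
                                             return_attention_mask=True)
processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)

model = WavLMForCTC.from_pretrained(
    "microsoft/wavlm-large",                      # assumed Hub checkpoint name
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,
    vocab_size=len(processor.tokenizer),
)
model.freeze_feature_encoder()  # common practice: keep the convolutional feature encoder frozen
# The model can now be fine-tuned, e.g. with the Trainer setup from the linked example.
```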
Speech Classification
To fine-tune the model for speech classification, see [the official audio classification example](https://github.com/huggingface/transformers/tree/master/examples/pytorch/audio-classification).
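Similarly, a minimal sketch of preparing the checkpoint for audio classification; the number of labels and checkpoint name are assumptions for illustration:

```python
from transformers import AutoFeatureExtractor, WavLMForSequenceClassification

feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/wavlm-large")
# num_labels is a placeholder; set it to the number of classes in your dataset.
model = WavLMForSequenceClassification.from_pretrained("microsoft/wavlm-large", num_labels=8)
# The classification head is randomly initialized and must be fine-tuned,
# e.g. with the Trainer setup from the linked audio classification example.
```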
Speaker Verification
TODO
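The original README leaves this section as a TODO. As a non-official starting point, `transformers` provides an x-vector head for WavLM; the sketch below assumes a checkpoint already fine-tuned for speaker verification (with the plain pre-trained checkpoint used here only as a placeholder, its x-vector head would be randomly initialized):

```python
import torch
from transformers import AutoFeatureExtractor, WavLMForXVector

# Placeholder checkpoint name; substitute a model fine-tuned for verification.
feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/wavlm-large")
model = WavLMForXVector.from_pretrained("microsoft/wavlm-large")

# Two one-second silent clips stand in for two real 16kHz utterances.
audio = [torch.zeros(16000).numpy(), torch.zeros(16000).numpy()]
inputs = feature_extractor(audio, sampling_rate=16000, padding=True, return_tensors="pt")

with torch.no_grad():
    embeddings = model(**inputs).embeddings
embeddings = torch.nn.functional.normalize(embeddings, dim=-1)
similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=-1)
print(similarity.item())  # compare against a decision threshold tuned on a dev set
```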
Speaker Diarization
TODO
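Also a TODO in the original README. A non-official sketch using the frame-level classification head available in `transformers`, again assuming a checkpoint fine-tuned for diarization (the plain pre-trained model's head is randomly initialized):

```python
import torch
from transformers import AutoFeatureExtractor, WavLMForAudioFrameClassification

# Placeholder checkpoint name; substitute a model fine-tuned for diarization.
feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/wavlm-large")
model = WavLMForAudioFrameClassification.from_pretrained("microsoft/wavlm-large", num_labels=2)

speech = torch.zeros(16000).numpy()  # stand-in for a real 16kHz recording
inputs = feature_extractor(speech, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # (batch, frames, num_speakers)
speaker_activity = torch.sigmoid(logits) > 0.5  # per-frame activity of each speaker
print(speaker_activity.shape)
```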
Documentation
Model Details
- Pretrained on: 94k hours of 16kHz sampled speech audio
- Paper: WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing
- Authors: Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei
Abstract
Self-supervised learning (SSL) achieves great success in speech recognition, while limited exploration has been attempted for other speech processing tasks. As speech signal contains multi-faceted information including speaker identity, paralinguistics, spoken content, etc., learning universal representations for all speech tasks is challenging. In this paper, we propose a new pre-trained model, WavLM, to solve full-stack downstream speech tasks. WavLM is built based on the HuBERT framework, with an emphasis on both spoken content modeling and speaker identity preservation. We first equip the Transformer structure with gated relative position bias to improve its capability on recognition tasks. For better speaker discrimination, we propose an utterance mixing training strategy, where additional overlapped utterances are created unsupervisely and incorporated during model training. Lastly, we scale up the training dataset from 60k hours to 94k hours. WavLM Large achieves state-of-the-art performance on the SUPERB benchmark, and brings significant improvements for various speech processing tasks on their representative benchmarks.
Important Notes
- When using the model, make sure that your speech input is sampled at 16kHz (see the resampling sketch after this list).
- This model does not have a tokenizer, as it was pretrained on audio alone. For speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data. Refer to [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more details on fine-tuning.
- The model was pre-trained on phonemes rather than characters, so make sure the input text is converted to a sequence of phonemes before fine-tuning.
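A minimal resampling sketch for the 16kHz requirement, assuming `torchaudio` is available (not mentioned in the original README) and a hypothetical input file:

```python
import torchaudio

# "speech.wav" is a placeholder path; any mono recording works.
waveform, sampling_rate = torchaudio.load("speech.wav")
if sampling_rate != 16000:
    # WavLM was pretrained on 16kHz audio, so resample anything else.
    waveform = torchaudio.functional.resample(waveform, orig_freq=sampling_rate, new_freq=16000)
```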
Technical Details
The original README does not provide specific technical implementation details beyond the paper abstract above.
License
The official license can be found here.

Contribution
The model was contributed by cywang and patrickvonplaten.