# s2t-small-mustc-en-es-st

`s2t-small-mustc-en-es-st` is an end-to-end Speech Translation (ST) model based on the Speech to Text Transformer (S2T) architecture. It translates English speech to Spanish text.
## 🚀 Quick Start

This model is a standard sequence-to-sequence transformer. You can use the `generate` method to produce translations by passing speech features to the model.
### ⚠️ Important Note

The `Speech2TextProcessor` object uses torchaudio to extract the filter bank features. Make sure to install the torchaudio package before running this example. You can either install the extra speech dependencies with `pip install "transformers[speech,sentencepiece]"` or install the packages separately with `pip install torchaudio sentencepiece`.
```python
import soundfile as sf
from datasets import load_dataset
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration

model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-mustc-en-es-st")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-mustc-en-es-st")

# Read the raw waveform from each audio file
def map_to_array(batch):
    speech, _ = sf.read(batch["file"])
    batch["speech"] = speech
    return batch

ds = load_dataset(
    "patrickvonplaten/librispeech_asr_dummy",
    "clean",
    split="validation",
)
ds = ds.map(map_to_array)

inputs = processor(ds["speech"][0], sampling_rate=16_000, return_tensors="pt")
generated_ids = model.generate(
    inputs["input_features"], attention_mask=inputs["attention_mask"]
)
translation = processor.batch_decode(generated_ids, skip_special_tokens=True)
```
## ✨ Features

- **End-to-end ST**: directly translates English speech to Spanish text.
- **Transformer-based**: uses a transformer-based seq2seq (encoder-decoder) architecture.
- **Pre-trained encoder**: the encoder is pre-trained for English ASR, accelerating training and improving performance.
## 📦 Installation

You can install the necessary packages as extra speech dependencies:

```bash
pip install "transformers[speech,sentencepiece]"
```

Or install the packages separately:

```bash
pip install torchaudio sentencepiece
```
## 📚 Documentation

### Model description

S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech Translation (ST). It uses a convolutional downsampler that shortens the speech input to roughly a quarter of its original length before it is fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the transcripts/translations autoregressively.
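As a rough illustration of the downsampling step: in the fairseq S2T implementation the downsampler stacks two 1-D convolutions with stride 2, so the frame sequence shrinks by a factor of about four. The kernel size and padding below (5 and 2) are assumptions for this sketch and may not match every checkpoint.

```python
def conv_out_len(n_frames, kernel=5, stride=2, padding=2):
    """Output length of a single 1-D convolution (standard formula)."""
    return (n_frames + 2 * padding - kernel) // stride + 1

def downsampled_len(n_frames):
    """Two stride-2 conv layers stacked: length drops to roughly 1/4."""
    return conv_out_len(conv_out_len(n_frames))

print(downsampled_len(1000))  # 250 frames reach the encoder
```

Shortening the sequence this way keeps the encoder's quadratic self-attention cost manageable on long utterances.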
### Intended uses & limitations

This model can be used for end-to-end English speech to Spanish text translation. See the model hub to look for other S2T checkpoints.
## 🔧 Technical Details

### Training data

s2t-small-mustc-en-es-st is trained on the English-Spanish subset of [MuST-C](https://ict.fbk.eu/must-c/). MuST-C is a multilingual speech translation corpus whose size and quality facilitate the training of end-to-end speech translation systems from English into several languages. For each target language, MuST-C comprises several hundred hours of audio recordings from English TED Talks, automatically aligned at the sentence level with their manual transcriptions and translations.
### Training procedure

#### Preprocessing

The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel filter bank features automatically from WAV/FLAC audio files via PyKaldi or torchaudio. Utterance-level CMVN (cepstral mean and variance normalization) is then applied to each example.

The texts are lowercased and tokenized using SentencePiece with a vocabulary size of 8,000.
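Utterance-level CMVN is simple to sketch: each of the 80 channels is normalized by the mean and standard deviation computed over that single utterance. A minimal NumPy version (the real pipeline extracts the filter bank features with PyKaldi or torchaudio first; this shows only the normalization step):

```python
import numpy as np

def utterance_cmvn(features: np.ndarray) -> np.ndarray:
    """Normalize a (num_frames, num_mel_bins) log mel filter bank matrix
    to zero mean and unit variance per channel, over this utterance only."""
    mean = features.mean(axis=0, keepdims=True)
    std = features.std(axis=0, keepdims=True)
    return (features - mean) / np.maximum(std, 1e-8)

# Example: stand-in "features" for a 100-frame, 80-bin utterance
feats = np.random.randn(100, 80) * 3.0 + 5.0
normed = utterance_cmvn(feats)
```

Because the statistics come from the utterance itself, no global corpus statistics are needed at inference time.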
#### Training

The model is trained with standard autoregressive cross-entropy loss and SpecAugment. The encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate training and improve performance, the encoder is pre-trained for English ASR.
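SpecAugment operates directly on the filter bank matrix, zeroing out random bands of frequency channels and random spans of time frames during training. A minimal NumPy sketch (the mask counts and maximum widths below are illustrative, not this model's exact hyperparameters):

```python
import numpy as np

rng = np.random.default_rng(0)

def spec_augment(features, n_freq_masks=2, max_f=13, n_time_masks=2, max_t=20):
    """Apply frequency and time masking to a (num_frames, num_mel_bins) matrix."""
    out = features.copy()
    n_frames, n_bins = out.shape
    for _ in range(n_freq_masks):
        width = int(rng.integers(0, max_f + 1))
        start = int(rng.integers(0, n_bins - width + 1))
        out[:, start:start + width] = 0.0   # mask a band of mel channels
    for _ in range(n_time_masks):
        width = int(rng.integers(0, max_t + 1))
        start = int(rng.integers(0, n_frames - width + 1))
        out[start:start + width, :] = 0.0   # mask a span of frames
    return out

augmented = spec_augment(np.ones((100, 80)))
```

Masking whole channels and frames forces the encoder to rely on context rather than any single region of the spectrogram, which acts as a cheap regularizer.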
### Evaluation results

MuST-C test results for en-es (BLEU score): **27.2**
## 📄 License

This model is released under the MIT license.

## BibTeX entry and citation info
```bibtex
@inproceedings{wang2020fairseqs2t,
  title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
  author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
  booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
  year = {2020},
}
```
## Information Table

| Property | Details |
|----------|---------|
| Model Type | Speech to Text Transformer (S2T) for end-to-end Speech Translation (ST) |
| Training Data | English-Spanish subset of [MuST-C](https://ict.fbk.eu/must-c/) |