🚀 SEW-D-tiny
SEW-D-tiny is a base model pre-trained on 16 kHz sampled speech audio. It can be used for downstream tasks such as automatic speech recognition, speaker identification, intent classification, and emotion recognition. When using the model, make sure your speech input is also sampled at 16 kHz.
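If your audio is stored at a different sample rate, it must be resampled to 16 kHz before being passed to the processor. The sketch below is a minimal illustration using plain NumPy linear interpolation; it is not part of this repository, and in practice a proper resampler with anti-aliasing (e.g. `torchaudio` or `librosa`) is preferable.

```python
import numpy as np

def resample_to_16k(audio: np.ndarray, orig_sr: int, target_sr: int = 16000) -> np.ndarray:
    """Naive resampling via linear interpolation (illustrative sketch only)."""
    if orig_sr == target_sr:
        return audio
    duration = len(audio) / orig_sr
    n_target = int(round(duration * target_sr))
    old_t = np.linspace(0.0, duration, num=len(audio), endpoint=False)
    new_t = np.linspace(0.0, duration, num=n_target, endpoint=False)
    return np.interp(new_t, old_t, audio)

# One second of a 440 Hz tone sampled at 44.1 kHz -> 16000 samples after resampling
sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
resampled = resample_to_16k(tone, sr)
print(len(resampled))  # 16000
```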
🔍 Model Information
| Property | Details |
| --- | --- |
| Model Type | Speech recognition model |
| Training Data | LibriSpeech ASR dataset |
| Tags | audio, speech, automatic-speech-recognition, hf-asr-leaderboard |
| License | Apache-2.0 |
📚 Related Links
📖 Paper Information
- Title: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition
- Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
- Abstract: This paper studies performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). Focusing on wav2vec 2.0, it formalizes several architecture designs that influence both model performance and efficiency. Putting together all these observations, it introduces SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both the performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
- Original model: https://github.com/asappresearch/sew#model-checkpoints
🚀 Quick Start
💻 Usage Examples
Basic Usage
```python
from transformers import Wav2Vec2Processor, SEWDForCTC
from datasets import load_dataset
import torch

# Load the processor and the fine-tuned model
processor = Wav2Vec2Processor.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")
model = SEWDForCTC.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")

# Load a small dummy LibriSpeech split for a quick demo
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")

# Tokenize the 16 kHz waveform
input_values = processor(ds[0]["audio"]["array"], sampling_rate=16000, return_tensors="pt").input_values

# Retrieve logits and decode with greedy (argmax) CTC decoding
with torch.no_grad():
    logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
Evaluation Example
```python
from datasets import load_dataset
from transformers import SEWDForCTC, Wav2Vec2Processor
import torch
from jiwer import wer

librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")

model = SEWDForCTC.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")

def map_to_pred(batch):
    input_values = processor(batch["audio"][0]["array"], sampling_rate=16000,
                             return_tensors="pt", padding="longest").input_values
    with torch.no_grad():
        logits = model(input_values.to("cuda")).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    transcription = processor.batch_decode(predicted_ids)
    batch["transcription"] = transcription
    return batch

result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"])

print("WER:", wer(result["text"], result["transcription"]))
```
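The word error rate reported above is the word-level Levenshtein edit distance between reference and hypothesis, normalized by the number of reference words. As a self-contained illustration of what `jiwer.wer` computes (this helper is not part of the repository), a minimal dynamic-programming sketch:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / number of reference words,
    computed as Levenshtein distance over word tokens (illustrative sketch)."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the cat sat", "the cat sit"))  # 1 substitution / 3 words ≈ 0.333
```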
📊 Evaluation Results
Word error rate (WER) on the LibriSpeech test sets:

| "clean" | "other" |
| --- | --- |
| 10.47 | 22.73 |
📄 License
This project is licensed under the Apache-2.0 License.