🚀 Wav2vec2 Large 100k Voxpopuli fine-tuned for Russian on Common Voice and M-AILABS
This project fine-tunes the Wav2vec2 Large 100k Voxpopuli model for Russian using the Common Voice 7.0 and M-AILABS datasets. The resulting model can be used for Russian automatic speech recognition (ASR).
🚀 Quick Start
Install dependencies
The project runs on Python with the usual deep-learning libraries; install the required packages with:
```bash
pip install transformers torchaudio datasets jiwer
```
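To confirm the installation, a quick import check (a convenience snippet, not part of the original card) can be run:

```python
# Sanity check: all four libraries should import without errors
import transformers
import torchaudio
import datasets
import jiwer

print("transformers", transformers.__version__)
print("torchaudio", torchaudio.__version__)
```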
Load the model and tokenizer
```python
from transformers import AutoTokenizer, Wav2Vec2ForCTC

tokenizer = AutoTokenizer.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-Common-Voice_plus_TTS-Dataset-russian")
model = Wav2Vec2ForCTC.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-Common-Voice_plus_TTS-Dataset-russian")
```
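Optionally, the model can be moved to a GPU and switched to inference mode; this is a minimal convenience sketch, not part of the original card:

```python
import torch

# Use a GPU when available and disable dropout for inference.
# If you do this, remember to move input tensors to the same device later.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device).eval()
```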
💻 Usage Examples
Basic usage
The following snippet transcribes a single audio file with the model:
```python
from transformers import AutoTokenizer, Wav2Vec2ForCTC
import torch
import torchaudio

tokenizer = AutoTokenizer.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-Common-Voice_plus_TTS-Dataset-russian")
model = Wav2Vec2ForCTC.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-Common-Voice_plus_TTS-Dataset-russian")

# Load the audio and resample it to the 16 kHz rate the model expects
audio_file = "your_audio_file.wav"
waveform, sample_rate = torchaudio.load(audio_file)
resampler = torchaudio.transforms.Resample(orig_freq=sample_rate, new_freq=16000)
waveform = resampler(waveform)

# Convert the waveform to model inputs and run greedy CTC decoding
input_values = tokenizer(waveform.squeeze().numpy(), return_tensors="pt").input_values
with torch.no_grad():
    logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = tokenizer.decode(predicted_ids[0])
print("Transcription:", transcription)
```
Advanced usage
Evaluating on the Common Voice test set
```python
from datasets import load_dataset
import torch
import torchaudio
import re
from jiwer import wer

# Russian test split of Common Voice (assumes the corpus was downloaded to data_dir)
dataset = load_dataset("common_voice", "ru", split="test", data_dir="./cv-corpus-6.1-2020-12-11")

resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"]'

def map_to_array(batch):
    # Load each clip, resample it to 16 kHz and normalize the reference text
    speech, _ = torchaudio.load(batch["path"])
    batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
    batch["sampling_rate"] = resampler.new_freq
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
    return batch

ds = dataset.map(map_to_array)

def map_to_pred(batch):
    # Greedy CTC decoding; uses the model and tokenizer loaded above
    input_values = tokenizer(batch["speech"], return_tensors="pt").input_values
    with torch.no_grad():
        logits = model(input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    batch["predicted"] = tokenizer.batch_decode(predicted_ids)
    batch["target"] = batch["sentence"]
    return batch

result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys()))

print("WER:", wer(list(result["target"]), list(result["predicted"])))
```
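As a quick sanity check of the metric itself (a toy example, not from the original card), `jiwer.wer` can be called directly on reference and hypothesis strings:

```python
from jiwer import wer

# One substituted word out of a four-word reference -> WER = 0.25
print(wer("speech recognition is hard", "speech recognition is easy"))
```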
📚 Documentation
Model information
| Property | Details |
|----------|---------|
| Model type | Wav2vec2 Large 100k Voxpopuli fine-tuned for Russian |
| Training data | Common Voice 7.0 and M-AILABS |
| Evaluation metric | Word Error Rate (WER) |
Results
For detailed experimental results, please refer to the paper.
📄 License
This project is licensed under the Apache-2.0 license.