Whisper-Large-V3-Distil-French-v0.2
A distilled version of Whisper optimized for French speech-to-text, with enhanced long-form transcription capabilities.
This model is a distilled version of Whisper with 2 decoder layers, specifically tailored for French speech-to-text tasks. Compared to v0.1, it extends the training to 30-second audio segments to maintain long-form transcription abilities. During the distillation process, a "patient" teacher was used, which means longer training times and more aggressive data augmentation, leading to improved overall performance.
The model uses openai/whisper-large-v3 as the teacher model while keeping the encoder architecture unchanged. It can be used as a draft model for speculative decoding, potentially achieving 2x inference speed while maintaining identical outputs, by adding only 2 extra decoder layers and running the encoder just once. It can also serve as a standalone model, trading some accuracy for better efficiency, running 5.8x faster while using only 49% of the parameters. According to this paper, the distilled model may produce fewer hallucinations than the full model during long-form transcription.
The model has been converted into multiple formats to ensure broad compatibility across libraries including transformers, openai-whisper, faster-whisper, whisper.cpp, candle, and mlx.
Quick Start
This section will guide you through the basic steps to use the Whisper-Large-V3-Distil-French-v0.2 model.
✨ Features
- Optimized for French: Specifically designed for French speech-to-text tasks.
- Long-form Transcription: Extended training on 30-second audio segments to maintain long-form transcription abilities.
- Efficient Decoding: Can be used for speculative decoding, achieving 2x inference speed.
- High Compatibility: Converted into multiple formats for broad library compatibility.
Documentation
Performance
The model was evaluated on both short and long-form transcriptions, using in-distribution (ID) and out-of-distribution (OOD) datasets to assess accuracy, generalizability, and robustness.
Note that Word Error Rate (WER) results shown here are post-normalization, which includes converting text to lowercase and removing symbols and punctuation.
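As a rough illustration, this normalization is similar to the basic Whisper text normalizer shipped with 🤗 Transformers; the exact normalization pipeline used for the reported numbers may differ slightly:
from transformers.models.whisper.english_normalizer import BasicTextNormalizer

normalizer = BasicTextNormalizer()
# Lowercases the text and strips punctuation/symbols before WER is computed
print(normalizer("Bonjour, comment allez-vous ?"))  # e.g. "bonjour comment allez vous"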
All evaluation results on the public datasets can be found here.
Short-Form Transcription
Italic indicates in-distribution (ID) evaluation, where test sets correspond to data distributions seen during training, typically yielding higher performance than out-of-distribution (OOD) evaluation. Italic and strikethrough denote potential test set contamination, for example when training and evaluation use different versions of Common Voice, raising the possibility of overlapping data.
Due to the limited availability of out-of-distribution (OOD) and long-form French test sets, evaluation was also performed using internal test sets from Zaion Lab, consisting of human-annotated call center conversations with significant background noise and domain-specific terminology.
Long-Form Transcription
Long-form transcription evaluation used the 🤗 Hugging Face pipeline with both [chunked](https://huggingface.co/blog/asr-chunking) (chunk_length_s=30) and original sequential decoding methods.
Usage
Hugging Face Pipeline
The model can be easily used with the 🤗 Hugging Face pipeline class for audio transcription. For long-form transcription (over 30 seconds), it will perform sequential decoding as described in OpenAI's paper. If you need faster inference, you can use the chunk_length_s argument for [chunked parallel decoding](https://huggingface.co/blog/asr-chunking), which provides 9x faster inference speed but may slightly compromise performance compared to OpenAI's sequential algorithm.
import torch
from datasets import load_dataset
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
# Load model
model_name_or_path = "bofenghuang/whisper-large-v3-distil-fr-v0.2"
processor = AutoProcessor.from_pretrained(model_name_or_path)
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_name_or_path,
torch_dtype=torch_dtype,
low_cpu_mem_usage=True,
)
model.to(device)
# Init pipeline
pipe = pipeline(
"automatic-speech-recognition",
model=model,
feature_extractor=processor.feature_extractor,
tokenizer=processor.tokenizer,
torch_dtype=torch_dtype,
device=device,
# chunk_length_s=30, # for chunked decoding
max_new_tokens=128,
)
# Example audio
dataset = load_dataset("bofenghuang/asr-dummy", "fr", split="test")
sample = dataset[0]["audio"]
# Run pipeline
result = pipe(sample)
print(result["text"])
Hugging Face Low-level APIs
You can also use the 🤗 Hugging Face low-level APIs for transcription, offering greater control over the process, as demonstrated below:
import torch
from datasets import load_dataset
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
# Load model
model_name_or_path = "bofenghuang/whisper-large-v3-distil-fr-v0.2"
processor = AutoProcessor.from_pretrained(model_name_or_path)
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_name_or_path,
torch_dtype=torch_dtype,
low_cpu_mem_usage=True,
)
model.to(device)
# Example audio
dataset = load_dataset("bofenghuang/asr-dummy", "fr", split="test")
sample = dataset[0]["audio"]
# Extract features
input_features = processor(
sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt"
).input_features
# Generate tokens
predicted_ids = model.generate(
input_features.to(dtype=torch_dtype).to(device), max_new_tokens=128
)
# Detokenize to text
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]
print(transcription)
Speculative Decoding
[Speculative decoding](https://huggingface.co/blog/whisper-speculative-decoding) can be achieved using a draft model, essentially a distilled version of Whisper. This approach guarantees identical outputs to using the main Whisper model alone, offers a 2x faster inference speed, and incurs only a slight increase in memory overhead.
Since the distilled Whisper has the same encoder as the original, only its decoder needs to be loaded, and encoder outputs are shared between the main and draft models during inference.
Using speculative decoding with the Hugging Face pipeline is simple: just specify the assistant_model within the generation configurations.
import torch
from datasets import load_dataset
from transformers import (
AutoModelForCausalLM,
AutoModelForSpeechSeq2Seq,
AutoProcessor,
pipeline,
)
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
# Load model
model_name_or_path = "openai/whisper-large-v3"
processor = AutoProcessor.from_pretrained(model_name_or_path)
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_name_or_path,
torch_dtype=torch_dtype,
low_cpu_mem_usage=True,
)
model.to(device)
# Load draft model
assistant_model_name_or_path = "bofenghuang/whisper-large-v3-distil-fr-v0.2"
assistant_model = AutoModelForCausalLM.from_pretrained(
assistant_model_name_or_path,
torch_dtype=torch_dtype,
low_cpu_mem_usage=True,
)
assistant_model.to(device)
# Init pipeline
pipe = pipeline(
"automatic-speech-recognition",
model=model,
feature_extractor=processor.feature_extractor,
tokenizer=processor.tokenizer,
torch_dtype=torch_dtype,
device=device,
generate_kwargs={"assistant_model": assistant_model},
max_new_tokens=128,
)
# Example audio
dataset = load_dataset("bofenghuang/asr-dummy", "fr", split="test")
sample = dataset[0]["audio"]
# Run pipeline
result = pipe(sample)
print(result["text"])
OpenAI Whisper
You can also employ the sequential long-form decoding algorithm with a sliding window and temperature fallback, as outlined by OpenAI in their original paper.
First, install the openai-whisper package:
pip install -U openai-whisper
Then, download the converted model:
huggingface-cli download --include original_model.pt --local-dir ./models/whisper-large-v3-distil-fr-v0.2 bofenghuang/whisper-large-v3-distil-fr-v0.2
Now, you can transcribe audio files by following the usage instructions provided in the repository:
import whisper
from datasets import load_dataset
# Load model
model_name_or_path = "./models/whisper-large-v3-distil-fr-v0.2/original_model.pt"
model = whisper.load_model(model_name_or_path)
# Example audio
dataset = load_dataset("bofenghuang/asr-dummy", "fr", split="test")
sample = dataset[0]["audio"]["array"].astype("float32")
# Transcribe
result = model.transcribe(sample, language="fr")
print(result["text"])
Faster Whisper
Faster Whisper is a reimplementation of OpenAI's Whisper models and the sequential long-form decoding algorithm in the CTranslate2 library.
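As a minimal sketch, the CTranslate2 conversion of this model can be loaded with the faster-whisper API as shown below. The local path is an assumption: it presumes you have already downloaded the converted weights from the repository (for example with huggingface-cli download); adjust it to wherever they are actually stored.
from datasets import load_dataset
from faster_whisper import WhisperModel  # pip install faster-whisper

# Path to the CTranslate2 conversion of the model (assumed local location; adjust as needed)
model_name_or_path = "./models/whisper-large-v3-distil-fr-v0.2"
model = WhisperModel(model_name_or_path, device="cuda", compute_type="float16")

# Example audio
dataset = load_dataset("bofenghuang/asr-dummy", "fr", split="test")
sample = dataset[0]["audio"]["array"].astype("float32")

# Transcribe using faster-whisper's sequential long-form decoding
segments, info = model.transcribe(sample, language="fr")
print(" ".join(segment.text.strip() for segment in segments))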
License
This project is licensed under the MIT license.

