🚀 CrisperWhisper
CrisperWhisper is an advanced speech recognition model based on OpenAI's Whisper, designed for fast, precise, and verbatim speech recognition with accurate word-level timestamps. It aims to transcribe every spoken word exactly as it is spoken, including fillers, pauses, stutters, and false starts.
📦 Model Information
| Property | Details |
| --- | --- |
| License | CC-BY-NC-4.0 |
| Languages Supported | German, English |
| Base Model | openai/whisper-large-v3, nyrahealth/CrisperWhisper |
| Metrics | CER, WER |
| Pipeline Tag | Automatic Speech Recognition |
| Library Name | transformers |
📢 Additional Information
✨ Features
- 🎯 Accurate Word-Level Timestamps: Provides precise timestamps, even around disfluencies and pauses, by utilizing an adjusted tokenizer and a custom attention loss during training (see the example output sketch after this list).
- 📝 Verbatim Transcription: Transcribes every spoken word exactly as it is, including and differentiating fillers like "um" and "uh".
- 🔍 Filler Detection: Detects and accurately transcribes fillers.
- 🛡️ Hallucination Mitigation: Minimizes transcription hallucinations to enhance accuracy.
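For context, word-level output from the Hugging Face pipeline used in the usage example below has roughly the following shape; the words and timings here are invented purely for illustration:

```python
# Illustrative only: structure of a word-level ASR pipeline result,
# with fillers transcribed verbatim as [UM]/[UH] tokens.
example_output = {
    "text": " I think [UM] we should [UH] start over.",
    "chunks": [
        {"text": " I", "timestamp": (0.0, 0.12)},
        {"text": " think", "timestamp": (0.12, 0.38)},
        {"text": " [UM]", "timestamp": (0.52, 0.74)},
        # ... one entry per word, including fillers, each with (start, end) in seconds
    ],
}
```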
📈 Performance Overview
🔍 Qualitative Performance Overview
| Audio | Whisper Large V3 | CrisperWhisper |
| --- | --- | --- |
| Demo de 1 | Er war kein Genie, aber doch ein fähiger Ingenieur. | Es ist zwar kein. Er ist zwar kein Genie, aber doch ein fähiger Ingenieur. |
| Demo de 2 | Leider müssen wir in diesen schweren Zeiten auch unserem Tagesgeschäft nachgehen. Der hier vorgelegte Kulturhaushalt der Ampelregierung strebt an, den Erfolgskurs der Union zumindest fiskalisch fortzuführen. | Leider [UH] müssen wir in diesen [UH] schweren Zeiten auch [UH] unserem [UH] Tagesgeschäft nachgehen. Der hier [UH] vorgelegte [UH] Kulturhaushalt der [UH] Ampelregierung strebt an, den [UH] Erfolgskurs der Union [UH] zumindest [UH] fiskalisch fortzuführen. Es. |
| Demo de 3 | die über alle FRA-Fraktionen hinweg gut im Blick behalten sollten, auch weil sie teilweise sehr teeteuer sind. Aber nicht nur, weil sie teeteuer sind. Wir steigen mit diesem Endentwurf ein in die sogenannten Pandemie-Bereitschaftsverträge. | Die über alle Fr Fraktionen hinweg gut im [UH] Blick behalten sollten, auch weil sie teil teilweise sehr te teuer sind. Aber nicht nur, weil sie te teuer sind. Wir [UH] steigen mit diesem Ent Entwurf ein in die sogenannten Pand Pandemiebereitschaftsverträge. |
| Demo en 1 | alternative is you can get like, you have those Dr. Bronner's | Alternative is you can get like [UH] you have those, you know, those doctor Brahmer's. |
| Demo en 2 | influence our natural surrounding? How does it influence our ecosystem? | Influence our [UM] our [UH] our natural surrounding. How does it influence our ecosystem? |
| Demo en 3 | and always find a place on the street to park and it was easy and you weren't a long distance away from wherever it was that you were trying to go. So I remember that being a lot of fun and easy to do and there were nice places to go and good events to attend. Come downtown and you had the Warner Theater and | And always find a place on the street to park. And and it was it was easy and you weren't a long distance away from wherever it was that you were trying to go. So, I I I remember that being a lot of fun and easy to do and there were nice places to go and, [UM] i good events to attend. Come downtown and you had the Warner Theater and, [UM] |
| Demo en 4 | you know, more masculine, who were rough, and that definitely wasn't me. Then, you know, I was very smart because my father made sure I was smart, you know. So, you know, I hung around those people, you know. And then you had the ones that were just out doing things that they shouldn't have been doing also. So, yeah, I was in the little geek squad. You were in the little geek squad. Yeah. | you know, more masculine, who were rough, and that definitely wasn't me. Then, you know, I was very smart because my father made sure I was smart. You know, so, [UM] you know, I I hung around those people, you know. And then you had the ones that were just just out doing things that they shouldn't have been doing also. So yeah, I was the l I was in the little geek squad. Do you |
📊 Quantitative Performance Overview
Transcription Performance
CrisperWhisper significantly outperforms Whisper Large v3, especially on datasets that have a more verbatim transcription style in the ground truth, such as AMI and TED-LIUM.
Segmentation Performance
CrisperWhisper demonstrates superior segmentation performance. The performance gap is especially pronounced around disfluencies and pauses.
The following table uses the metrics as defined in the paper, computed with a collar of 50 ms. Attention heads for each model were selected using the method described in the How section, and for each model the configuration (over varying numbers of heads) attaining the highest F1 score was chosen. A simplified sketch of these metrics follows the table.
| Dataset | Metric | CrisperWhisper | Whisper Large v2 | Whisper Large v3 |
| --- | --- | --- | --- | --- |
| AMI IHM | F1 Score | 0.79 | 0.63 | 0.66 |
|  | Avg IOU | 0.67 | 0.54 | 0.53 |
| Common Voice | F1 Score | 0.80 | 0.42 | 0.48 |
|  | Avg IOU | 0.70 | 0.32 | 0.43 |
| TIMIT | F1 Score | 0.69 | 0.40 | 0.54 |
|  | Avg IOU | 0.56 | 0.32 | 0.43 |
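As a rough guide to how the collar-based F1 and average IOU can be read, the sketch below assumes predicted and reference word intervals are already aligned one-to-one; the function name and that matching assumption are illustrative and this is not the paper's evaluation code:

```python
def f1_and_avg_iou(predictions, references, collar=0.05):
    """Simplified collar-based F1 and average IOU for word-level timestamps.

    predictions / references: lists of (start, end) tuples in seconds,
    assumed here to be aligned one-to-one by word (illustrative simplification).
    """
    true_positives = 0
    ious = []
    for (p_start, p_end), (r_start, r_end) in zip(predictions, references):
        # A predicted word counts as correct if both boundaries lie within the collar.
        if abs(p_start - r_start) <= collar and abs(p_end - r_end) <= collar:
            true_positives += 1
        # Intersection over union of the two time intervals.
        intersection = max(0.0, min(p_end, r_end) - max(p_start, r_start))
        union = max(p_end, r_end) - min(p_start, r_start)
        ious.append(intersection / union if union > 0 else 0.0)

    # With one-to-one alignment, precision and recall coincide.
    precision = recall = true_positives / len(predictions)
    f1 = 2 * precision * recall / (precision + recall) if precision + recall > 0 else 0.0
    return f1, sum(ious) / len(ious)
```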
💻 Usage Examples
Basic Usage
First, install our custom transformers fork for the most accurate timestamps:
```
pip install git+https://github.com/nyrahealth/transformers.git@crisper_whisper
```
Advanced Usage
```python
import os
import sys

import torch
from datasets import load_dataset
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline


def adjust_pauses_for_hf_pipeline_output(pipeline_output, split_threshold=0.12):
    """
    Adjust pause timings by distributing pauses up to the threshold evenly between adjacent words.
    """
    adjusted_chunks = pipeline_output["chunks"].copy()

    for i in range(len(adjusted_chunks) - 1):
        current_chunk = adjusted_chunks[i]
        next_chunk = adjusted_chunks[i + 1]

        current_start, current_end = current_chunk["timestamp"]
        next_start, next_end = next_chunk["timestamp"]
        pause_duration = next_start - current_end

        if pause_duration > 0:
            # Split short pauses evenly; cap the redistributed amount for long pauses.
            if pause_duration > split_threshold:
                distribute = split_threshold / 2
            else:
                distribute = pause_duration / 2

            # Extend the current word's end and pull the next word's start inward.
            adjusted_chunks[i]["timestamp"] = (current_start, current_end + distribute)
            adjusted_chunks[i + 1]["timestamp"] = (next_start - distribute, next_end)

    pipeline_output["chunks"] = adjusted_chunks
    return pipeline_output


device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

model_id = "nyrahealth/CrisperWhisper"

# Load the model; the from_pretrained arguments follow standard transformers usage.
model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)
```
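From here, the loaded model can be wrapped in a standard `transformers` ASR pipeline with word-level timestamps and the pause adjustment applied to its output. The sketch below follows common `transformers` pipeline usage; the `distil-whisper/librispeech_long` sample and the chunking/batching parameters are illustrative choices, not requirements of the model:

```python
processor = AutoProcessor.from_pretrained(model_id)

pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    chunk_length_s=30,
    batch_size=16,
    return_timestamps="word",
    torch_dtype=torch_dtype,
    device=device,
)

# Any 16 kHz audio works here; this dataset is just an example input.
dataset = load_dataset("distil-whisper/librispeech_long", "clean", split="validation")
sample = dataset[0]["audio"]

hf_pipeline_output = pipe(sample)
crisper_whisper_result = adjust_pauses_for_hf_pipeline_output(hf_pipeline_output)
print(crisper_whisper_result)
```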
🌟 Highlights
- 🏆 1st place on the OpenASR Leaderboard in verbatim datasets (TED, AMI)
- 🎓 Accepted at INTERSPEECH 2024.
- 📄 Paper Drop: Check out our paper for details and reasoning behind our tokenizer adjustment.
- ✨ New Feature: Not mentioned in the paper is an added AttentionLoss that further improves timestamp accuracy. By adding a loss that trains the attention scores used for the DTW alignment on timestamped data, we significantly boosted alignment performance (a purely illustrative sketch of such a loss follows this list).
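The exact formulation of this attention loss is not given here. Purely as an illustration, a loss of this kind could penalize cross-attention mass that falls outside a token's ground-truth time span; all names, the 50 frames-per-second encoder rate, and the formulation below are assumptions, not the loss actually used for CrisperWhisper:

```python
import torch


def attention_alignment_loss(cross_attention, token_spans, frames_per_second=50):
    """Illustrative sketch only: penalize cross-attention mass outside each
    token's ground-truth time span (not the exact CrisperWhisper loss).

    cross_attention: tensor of shape (num_tokens, num_frames), attention weights.
    token_spans: list of (start_sec, end_sec) per token from timestamped data.
    """
    num_tokens, num_frames = cross_attention.shape
    mask = torch.zeros_like(cross_attention)
    for t, (start, end) in enumerate(token_spans):
        start_f = int(start * frames_per_second)
        end_f = min(num_frames, max(start_f + 1, int(end * frames_per_second)))
        mask[t, start_f:end_f] = 1.0
    # Average attention mass that falls outside the allowed spans.
    outside_mass = (cross_attention * (1.0 - mask)).sum(dim=-1)
    return outside_mass.mean()
```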
📄 License
This project is licensed under the CC-BY-NC-4.0 license.