
Distil Large V3 CT2

Developed by distil-whisper
Distil-Whisper is a distilled version of the Whisper model, optimized for long-form transcription, offering faster inference and an improved word error rate (WER).
Release Time: 3/21/2024

Model Overview

This model provides the distil-large-v3 weights converted to CTranslate2 format. It is specifically designed to be compatible with OpenAI Whisper's long-form transcription algorithm and achieves an average 5% improvement in word error rate (WER) over distil-large-v2.
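CTranslate2-format Whisper weights are typically loaded through the faster-whisper library. The sketch below is illustrative, not an official quickstart: the model identifier `"distil-large-v3"` and the file name `audio.mp3` are placeholders (substitute a local path to the converted weights and your own audio file). The transcription step exits quietly when the library or audio file is absent, so only the plain-Python timestamp formatter always runs.

```python
import os


def format_segment(start: float, end: float, text: str) -> str:
    """Render one transcription segment as a timestamped line."""
    return f"[{start:6.2f}s -> {end:6.2f}s] {text.strip()}"


if __name__ == "__main__":
    try:
        from faster_whisper import WhisperModel  # pip install faster-whisper
    except ImportError:
        raise SystemExit(0)  # library not installed; skip the demo

    if not os.path.exists("audio.mp3"):  # placeholder file name
        raise SystemExit(0)

    # int8 keeps memory low on CPU; on GPU, device="cuda" with
    # compute_type="float16" is the usual choice.
    model = WhisperModel("distil-large-v3", device="cpu", compute_type="int8")

    # condition_on_previous_text=True follows OpenAI's sequential
    # long-form algorithm, which this checkpoint is designed to match.
    segments, info = model.transcribe(
        "audio.mp3", beam_size=5, condition_on_previous_text=True
    )
    print(f"Detected language: {info.language} (p={info.language_probability:.2f})")
    for seg in segments:
        print(format_segment(seg.start, seg.end, seg.text))
```

The segments are yielded lazily, so long recordings can be printed as they are decoded rather than after the whole file finishes.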

Model Features

Efficient Inference
Fast inference enabled by the CTranslate2 engine, suitable for real-time speech recognition applications.
Long-form Optimization
Specially designed to be compatible with OpenAI Whisper's long-form transcription algorithm, delivering better performance with long audio files.
Performance Improvement
Compared to the distil-large-v2 version, it achieves an average 5% improvement in word error rate (WER) across 4 out-of-distribution datasets.
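The WER figure above can be made concrete: WER is the word-level edit distance (substitutions + deletions + insertions) divided by the number of reference words. A minimal sketch, with toy sentences that are illustrative only and unrelated to the model's actual evaluation data:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,        # deletion
                dp[i][j - 1] + 1,        # insertion
                dp[i - 1][j - 1] + cost, # substitution or match
            )
    return dp[len(ref)][len(hyp)] / len(ref)


print(wer("the quick brown fox", "the quick brown dog"))  # 0.25
```

A 5% average improvement therefore means the model makes proportionally fewer word-level errors across those held-out datasets; in practice a library such as jiwer is normally used rather than a hand-rolled function like this.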

Model Capabilities

English speech recognition
Long audio transcription
Real-time speech-to-text

Use Cases

Speech Transcription
Meeting Minutes
Automatically convert meeting recordings into text transcripts.
High accuracy, supports long-duration recordings.
Podcast Transcription
Convert podcast audio content into searchable text.
Excellent performance with long audio files.