
Faster Distil Whisper Large V3.5

Developed by Purfview
Distil-Whisper is a distilled version of the Whisper model, optimized for automatic speech recognition (ASR) and offering faster inference.
Downloads: 565
Release date: 4/6/2025

Model Overview

This is the Distil-Large-v3.5 model converted to CTranslate2 format for efficient speech recognition, making it suitable for applications that require fast transcription.
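A minimal transcription sketch using the faster-whisper library, which runs CTranslate2 models like this one. The `model_dir` default below is a placeholder assumption; point it at the actual downloaded model directory or repository ID.

```python
# Sketch: transcribing audio with a CTranslate2 Distil-Whisper model via
# faster-whisper. The default model path is a placeholder -- replace it with
# the downloaded model directory (or a Hugging Face repo ID).

def transcribe_file(audio_path: str, model_dir: str = "distil-large-v3.5"):
    """Load the model lazily and return (full_text, segments)."""
    from faster_whisper import WhisperModel  # requires: pip install faster-whisper

    # device="cuda" with compute_type="float16" for GPU; "cpu"/"int8" also works.
    model = WhisperModel(model_dir, device="cuda", compute_type="float16")
    segments, info = model.transcribe(audio_path, beam_size=5, language="en")
    segments = list(segments)  # transcribe() yields segments lazily
    text = " ".join(s.text.strip() for s in segments)
    return text, segments
```

The import is deferred inside the function so the module loads even where faster-whisper is not installed.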

Model Features

Efficient Inference
Runs on the CTranslate2 inference engine, yielding faster and more memory-efficient recognition than the original Whisper implementation.
Knowledge Distillation
Trained by distilling the Whisper model with large-scale pseudo-labeling, retaining most of the teacher model's accuracy in a smaller model.
Hardware Acceleration
Supports GPU acceleration and reduced-precision computation (e.g., float16) to speed up inference.
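The precision options above can be sketched as a small helper, assuming CTranslate2's standard compute-type names; the defaults chosen here are a common convention, not a requirement of the model.

```python
# Sketch: pick a CTranslate2 compute type for the available hardware.
# "float16" needs a GPU with FP16 support; "int8" is a safe CPU default.

def pick_compute_type(device: str) -> str:
    """Return a reasonable default compute type for the given device."""
    if device == "cuda":
        return "float16"  # half-precision weights and activations on GPU
    return "int8"         # 8-bit quantized inference on CPU

# Usage: WhisperModel(model_dir, device=dev, compute_type=pick_compute_type(dev))
```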

Model Capabilities

English speech recognition
Audio transcription
Long audio processing support
Adjustable recognition accuracy
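Long-audio support and adjustable accuracy map onto decoding options at transcription time; a hedged sketch, with parameter names following faster-whisper's `transcribe()` API:

```python
# Sketch: decoding options that trade accuracy for speed, returned as a dict
# to pass as **kwargs to faster-whisper's transcribe().

def decode_options(accurate: bool = True) -> dict:
    """Decoding options for long recordings."""
    return {
        "beam_size": 5 if accurate else 1,    # beam search vs greedy decoding
        "vad_filter": True,                   # skip long silences in long audio
        "condition_on_previous_text": True,   # keep context across 30 s windows
    }

# Usage (model is a faster_whisper.WhisperModel):
# segments, info = model.transcribe("meeting.wav", **decode_options(accurate=True))
```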

Use Cases

Speech Transcription
Meeting Minutes
Automatically transcribe meeting recordings
Quickly generate meeting transcripts
Podcast Transcription
Convert podcast content into text
Facilitates content search and archiving
Assistive Tools
Subtitle Generation
Automatically generate subtitles for video content
Enhances video accessibility
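For the subtitle use case, transcription segments can be rendered as an SRT file. This is a self-contained sketch; the segment attributes `.start`, `.end` (seconds), and `.text` match the objects yielded by faster-whisper's `transcribe()`.

```python
# Sketch: turn transcription segments into SRT subtitle text.

def srt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp, e.g. 3.5 -> '00:00:03,500'."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def segments_to_srt(segments) -> str:
    """Render segments (objects with .start, .end, .text) as SRT cue blocks."""
    blocks = []
    for i, seg in enumerate(segments, start=1):
        blocks.append(
            f"{i}\n{srt_timestamp(seg.start)} --> {srt_timestamp(seg.end)}\n"
            f"{seg.text.strip()}\n"
        )
    return "\n".join(blocks)
```

Writing the returned string to a `.srt` file yields subtitles most video players can load directly.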