Paraformer Large

Developed by FunASR
Paraformer is a non-autoregressive end-to-end speech recognition model with significant advantages over traditional autoregressive models: it generates the entire target sentence in parallel, which makes it particularly well suited to GPU-accelerated inference.
Downloads: 43
Release Time: 4/17/2023

Model Overview

Paraformer is an efficient non-autoregressive end-to-end speech recognition model that achieves performance comparable to autoregressive models on industrial-grade data while significantly improving inference efficiency.
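
As a rough usage sketch of how the model is typically run, the snippet below loads it through FunASR's AutoModel interface; the model alias "paraformer-zh" and the audio file name are assumptions and may need to match the identifiers in the FunASR model zoo.

    # Minimal sketch: transcribe one Mandarin recording with FunASR.
    # "paraformer-zh" and "example_zh.wav" are placeholder names, not confirmed by this page.
    from funasr import AutoModel

    model = AutoModel(model="paraformer-zh")           # load the Paraformer Large Mandarin model
    result = model.generate(input="example_zh.wav")    # non-autoregressive decoding of the clip
    print(result[0]["text"])                           # recognized text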

Model Features

Parallel Inference
Generates the entire target sentence in a single parallel pass, making it well suited to GPU-accelerated inference and significantly improving throughput.
Efficient Inference
Compared to traditional autoregressive models, it can cut the machine cost of a speech recognition cloud service to roughly one tenth.
High Performance
Achieves performance comparable to autoregressive models on industrial-grade data.
Industrial Applications
Trained on a 60,000-hour Mandarin dataset, suitable for industrial-grade application scenarios.

Model Capabilities

Mandarin speech recognition
High-precision text conversion
Batch speech processing
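
As a sketch of the batch processing capability above, the snippet below passes several files to a single generate call; the file names and the batch_size_s value are assumptions, and the exact batching parameters should be checked against the FunASR documentation.

    # Sketch: batch transcription of multiple Mandarin recordings.
    # File paths and batch_size_s are placeholder values.
    from funasr import AutoModel

    model = AutoModel(model="paraformer-zh")                        # same placeholder model alias as above
    wav_files = ["call_001.wav", "call_002.wav", "call_003.wav"]    # placeholder audio paths
    results = model.generate(input=wav_files, batch_size_s=300)     # batch audio up to ~300 s per forward pass
    for item in results:
        print(item["key"], item["text"])                            # file key and recognized text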

Use Cases

Speech Transcription Services
Cloud Speech Recognition Service
Provides efficient speech recognition capabilities for cloud services
Cuts machine costs to roughly one tenth compared with autoregressive models
Intelligent Customer Service
Customer Service Call Analysis
Real-time transcription of customer service calls