
Wav2Vec2 Large Baltic VoxPopuli V2

Developed by Facebook
Facebook's Wav2Vec2 large model, pre-trained on 27.5 hours of unlabeled audio from the Baltic-language subset of the VoxPopuli corpus.
Downloads: 25
Released: 3/2/2022

Model Overview

This is a speech model based on the Wav2Vec2 architecture, pre-trained specifically on Baltic languages. Because it was pre-trained on audio alone, it is suited to speech recognition tasks after fine-tuning.

Model Features

Baltic Language Pre-training
Pre-trained on 27.5 hours of unlabeled Baltic-language speech, making it well suited to speech recognition in these languages.
16kHz Audio Sampling
The model was pre-trained on speech audio sampled at 16 kHz; input speech data must also be sampled at 16 kHz.
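Since the model expects 16 kHz input, audio recorded at other rates must be resampled first. Below is a minimal pure-NumPy sketch using linear interpolation; for real work, prefer a proper polyphase/sinc resampler such as those in torchaudio or librosa.

```python
import numpy as np

def resample_linear(signal, orig_sr, target_sr=16_000):
    """Resample a 1-D signal via linear interpolation.

    Illustrative only: linear interpolation aliases high frequencies;
    production pipelines should use a band-limited resampler.
    """
    duration = len(signal) / orig_sr
    n_target = int(round(duration * target_sr))
    t_orig = np.arange(len(signal)) / orig_sr
    t_target = np.arange(n_target) / target_sr
    return np.interp(t_target, t_orig, signal)

# Example: 1 second of a 440 Hz tone recorded at 44.1 kHz -> 16 kHz
sr_in = 44_100
x = np.sin(2 * np.pi * 440 * np.arange(sr_in) / sr_in)
y = resample_linear(x, sr_in)
print(len(y))  # 16000 samples for 1 second of audio
```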
Self-Supervised Pre-training
The model is pre-trained on unlabeled audio, making it a good starting point for semi-supervised learning and speech representation learning tasks.
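For context, wav2vec 2.0 pre-training optimizes a contrastive (InfoNCE-style) objective: for each masked time step, the context vector must identify the true quantized latent among sampled distractors. The toy NumPy sketch below illustrates that loss for a single time step; the real objective also adds a codebook-diversity term and operates on batched tensors, so treat this as a simplified illustration, not the reference implementation.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two vectors
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def contrastive_loss(context, positive, distractors, temperature=0.1):
    """InfoNCE-style loss for one masked time step: the context vector
    must pick out the true quantized latent among the distractors."""
    candidates = [positive] + list(distractors)
    sims = np.array([cosine_sim(context, q) for q in candidates]) / temperature
    log_probs = sims - np.log(np.sum(np.exp(sims)))  # log-softmax over candidates
    return -log_probs[0]  # negative log-likelihood of the true latent

rng = np.random.default_rng(0)
c = rng.normal(size=256)
# Easy case: the positive is perfectly aligned with the context vector
loss_easy = contrastive_loss(c, c, [rng.normal(size=256) for _ in range(10)])
# Hard case: the positive is random while every distractor matches the context
loss_hard = contrastive_loss(c, rng.normal(size=256), [c.copy() for _ in range(10)])
print(loss_easy < loss_hard)  # an aligned positive yields a lower loss
```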

Model Capabilities

Automatic Speech Recognition
Speech Representation Learning

Use Cases

Speech Recognition
Baltic Language Speech-to-Text
Convert speech audio in Baltic languages to text
Speech Research
Speech Representation Learning
Used in research on representation learning for speech signals
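To make the speech-to-text use case concrete: after fine-tuning with a CTC head (e.g. `Wav2Vec2ForCTC` in Hugging Face Transformers), frame-level logits are turned into text by greedy CTC decoding: take the argmax per frame, collapse repeated tokens, and drop blanks. A small self-contained sketch with a hypothetical toy vocabulary (not the model's real tokenizer):

```python
import numpy as np

def ctc_greedy_decode(logits, vocab, blank_id=0):
    """Greedy CTC decoding: argmax per frame, collapse repeats, drop blanks."""
    ids = np.argmax(logits, axis=-1)
    decoded = []
    prev = None
    for i in ids:
        if i != prev and i != blank_id:
            decoded.append(vocab[i])
        prev = i
    return "".join(decoded)

# Toy frame-level logits over a made-up vocabulary; index 0 is the CTC blank
vocab = ["<pad>", "l", "a", "b", "s"]
frames = [1, 1, 2, 0, 3, 3, 2, 4]       # per-frame argmax ids
logits = np.eye(len(vocab))[frames]     # one-hot "logits" for illustration
print(ctc_greedy_decode(logits, vocab))  # "labas" (Lithuanian for "hello")
```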