
Data2vec Text Base

Developed by Facebook
A general self-supervised learning framework pre-trained on English text with the data2vec objective, which handles speech, vision, and text tasks through a unified approach
Downloads 1,796
Release Time: 3/2/2022

Model Overview

A Transformer model with a self-distillation architecture that performs cross-modal self-supervised learning by predicting latent representations of the complete input from a masked view; this text checkpoint is suited to natural language understanding tasks

Model Features

Cross-modal Unified Framework
First framework to unify self-supervised learning for speech, vision, and text with the same architecture and objective function
Contextual Representation Prediction
Unlike methods that predict local, discrete tokens, it learns to predict contextualized latent representations that encode information from the whole input
Self-distillation Architecture
A teacher network produces latent representations of the complete input, which a student network learns to predict from a masked view of the same input, a form of knowledge distillation
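The self-distillation objective above can be sketched in a few lines. This is a toy illustration, not the actual implementation: the arrays stand in for real encoder activations, the dimensions are arbitrary, and the instance normalization data2vec applies to teacher targets is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 8 tokens, hidden size 16, 4 layers; the target averages
# the top-K teacher layers, so it carries contextual, not token-local, signal.
T, H, L, K = 8, 16, 4, 3

def ema_update(teacher, student, tau=0.999):
    """Teacher weights track the student via an exponential moving average."""
    return {k: tau * teacher[k] + (1 - tau) * student[k] for k in teacher}

def smooth_l1(pred, target, beta=1.0):
    """Smooth L1 (Huber) loss used for regressing the teacher targets."""
    diff = np.abs(pred - target)
    return np.where(diff < beta, 0.5 * diff**2 / beta, diff - 0.5 * beta).mean()

# Stand-in teacher activations: one hidden-state matrix per layer,
# computed on the complete (unmasked) input.
teacher_layers = rng.normal(size=(L, T, H))
target = teacher_layers[-K:].mean(axis=0)   # average of the top-K layers

# The student sees a masked view and regresses the target at masked positions.
mask = np.zeros(T, dtype=bool)
mask[rng.choice(T, size=2, replace=False)] = True
student_pred = rng.normal(size=(T, H))      # stand-in for the student's output
loss = smooth_l1(student_pred[mask], target[mask])
```

After each training step the teacher is refreshed with `ema_update`, so the prediction targets improve as the student learns.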

Model Capabilities

Text representation learning
Sequence classification
Token classification
Question answering
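Capabilities such as sequence classification are realized by placing a small task head on top of the encoder's token representations. A minimal sketch, assuming a simulated encoder output and a hypothetical linear head (first-token pooling plus softmax is a common design, not specific to data2vec):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for encoder output: hidden states for a 10-token sequence, hidden=16.
hidden_states = rng.normal(size=(10, 16))

def classify_sequence(hidden, W, b):
    """Pool the first token's vector as a sequence summary, then apply a
    linear layer and softmax to obtain class probabilities."""
    pooled = hidden[0]
    logits = pooled @ W + b
    exp = np.exp(logits - logits.max())   # subtract max for numerical stability
    return exp / exp.sum()

W = rng.normal(size=(16, 2)) * 0.1        # 2 classes, e.g. negative/positive
b = np.zeros(2)
probs = classify_sequence(hidden_states, W, b)
```

Token classification works analogously, except the linear layer is applied to every token's hidden state rather than a single pooled vector.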

Use Cases

Text Understanding
Sentiment Analysis
Classifies the sentiment of whole sentences
Competitive performance on the GLUE benchmark
Named Entity Recognition
Identifies entities such as person names, locations, and organizations in text
Question Answering
Reading Comprehension
Answers relevant questions based on given articles
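For extractive reading comprehension, a QA head predicts start and end positions over the passage tokens, and the answer is the highest-scoring valid span. A sketch with hypothetical logits (the values below are illustrative, not model outputs):

```python
import numpy as np

# Hypothetical start/end logits over an 8-token passage, as a span-extraction
# QA head would produce on top of the encoder.
start_logits = np.array([0.1, 2.0, 0.3, 0.2, 0.1, 0.0, 0.1, 0.2])
end_logits   = np.array([0.0, 0.1, 0.4, 2.5, 0.2, 0.1, 0.0, 0.1])

def best_span(start_logits, end_logits, max_len=5):
    """Return the (start, end) pair with the highest combined score such
    that start <= end and the span is at most max_len tokens long."""
    best, best_score = (0, 0), -np.inf
    for s in range(len(start_logits)):
        for e in range(s, min(s + max_len, len(end_logits))):
            score = start_logits[s] + end_logits[e]
            if score > best_score:
                best, best_score = (s, e), score
    return best

span = best_span(start_logits, end_logits)  # → (1, 3)
```

The selected token span is then mapped back to character offsets in the original passage to produce the answer text.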