XLM-RoBERTa Base

Developed by FacebookAI
XLM-RoBERTa is a multilingual model pretrained on 2.5TB of filtered CommonCrawl data across 100 languages, using masked language modeling as the training objective.
Downloads: 9.6M
Release Date: 3/2/2022

Model Overview

XLM-RoBERTa is the multilingual version of RoBERTa. It supports 100 languages and is used primarily for text feature extraction and for fine-tuning on downstream tasks, as the sketch below illustrates.
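As a concrete starting point, here is a minimal feature-extraction sketch using the Hugging Face transformers library (assumed installed along with PyTorch); "xlm-roberta-base" is the checkpoint name published on the Hugging Face Hub.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModel.from_pretrained("xlm-roberta-base")

# One model handles all 100 pretraining languages.
sentences = ["Hello, world!", "Bonjour le monde !", "你好，世界！"]
inputs = tokenizer(sentences, padding=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state: (batch, sequence_length, 768) contextual token embeddings
print(outputs.last_hidden_state.shape)
```

Each sentence yields one embedding per token; pooling these (for example, mean pooling) gives a single sentence vector usable for similarity or clustering.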

Model Features

Multilingual Support
Supports 100 languages, suitable for multilingual text processing tasks.
Large-scale Pretraining
Pretrained on 2.5TB of filtered CommonCrawl data, with strong language representation capabilities.
Masked Language Modeling
Trained with the masked language modeling objective, learning bidirectional sentence representations (see the fill-mask sketch after this list).
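Because masked language modeling was the pretraining objective, the model can fill in masked tokens out of the box. A minimal sketch with the transformers fill-mask pipeline follows; note that XLM-RoBERTa's mask token is "<mask>".

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="xlm-roberta-base")

# The input must contain the model's mask token, "<mask>".
for prediction in unmasker("Hello I'm a <mask> model."):
    print(prediction["token_str"], round(prediction["score"], 3))
```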

Model Capabilities

Text Feature Extraction
Masked Language Modeling
Multilingual Text Processing
Downstream Task Fine-tuning

Use Cases

Natural Language Processing
Sequence Classification
Can be used for sentiment analysis, text classification, and similar tasks (see the classification sketch below).
Token Classification
Suitable for tasks such as Named Entity Recognition (NER); a token-classification sketch follows.
Question Answering
Can be used to build multilingual question answering systems (see the QA sketch after this list).
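For sequence classification, a classification head can be attached to the pretrained encoder. The sketch below assumes a hypothetical 3-class sentiment setup; the newly added head is randomly initialized and only produces meaningful predictions after fine-tuning on labeled data.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
# num_labels=3 is an illustrative choice (e.g. negative/neutral/positive).
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=3
)

batch = tokenizer(["This product is great!"], return_tensors="pt")
labels = torch.tensor([2])  # hypothetical label id for "positive"

outputs = model(**batch, labels=labels)
print(outputs.loss)    # cross-entropy loss used during fine-tuning
print(outputs.logits)  # shape (1, 3): one score per class
```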
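For token classification such as NER, the same encoder feeds a per-token tagging head. Here num_labels=9 assumes a CoNLL-style BIO tag set and is purely illustrative; the head must likewise be fine-tuned before its predictions are usable.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForTokenClassification.from_pretrained(
    "xlm-roberta-base", num_labels=9  # e.g. O, B-PER, I-PER, ... (assumption)
)

inputs = tokenizer("Angela Merkel visited Paris.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# One tag score vector per (sub)token: shape (1, sequence_length, 9)
print(logits.shape)
```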
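For question answering, the base checkpoint has no QA head, so a fine-tuned variant is required; the checkpoint name below is a community model assumed to be available on the Hub, and any XLM-RoBERTa checkpoint fine-tuned for extractive QA would work the same way.

```python
from transformers import pipeline

# "deepset/xlm-roberta-base-squad2" is an assumed community checkpoint.
qa = pipeline("question-answering", model="deepset/xlm-roberta-base-squad2")

result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a landmark located in Paris, France.",
)
print(result["answer"], round(result["score"], 3))
```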