
XLM-RoBERTa Large

Developed by FacebookAI
XLM-RoBERTa is a multilingual model pretrained with a masked language modeling objective on 2.5TB of filtered CommonCrawl data covering 100 languages.
Downloads: 5.3M
Release Date: 3/2/2022

Model Overview

XLM-RoBERTa is the multilingual counterpart of RoBERTa. It supports 100 languages and is primarily used for text feature extraction and for fine-tuning on downstream tasks; a minimal loading sketch follows.
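The sketch below, assuming the Hugging Face transformers library and the Hub checkpoint id FacebookAI/xlm-roberta-large, shows how the model can be loaded and used to extract contextual features; the French input sentence is an illustrative assumption.

```python
# Minimal sketch: extract multilingual text features with xlm-roberta-large.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("FacebookAI/xlm-roberta-large")
model = AutoModel.from_pretrained("FacebookAI/xlm-roberta-large")

# The same call works for any of the 100 supported languages.
inputs = tokenizer("Bonjour tout le monde !", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One contextual vector per token: shape (batch, seq_len, 1024).
features = outputs.last_hidden_state
print(features.shape)
```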

Model Features

Multilingual Support
Supports 100 languages, suitable for multilingual text processing tasks.
Large-scale Pretraining
Pretrained on 2.5TB of filtered CommonCrawl data, offering robust language understanding capabilities.
Masked Language Modeling
Trained with a masked language modeling objective, enabling bidirectional sentence representation learning (see the sketch after this list).
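As a hedged illustration of the masked language modeling objective at inference time, the fill-mask pipeline below predicts a masked token from context on both sides; the example sentence is an assumption, and XLM-RoBERTa's mask token is <mask>.

```python
# Sketch: masked token prediction with the fill-mask pipeline.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="FacebookAI/xlm-roberta-large")

# The model fills in <mask> using bidirectional context.
for pred in unmasker("Hello, I'm a <mask> model."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```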

Model Capabilities

Text Feature Extraction
Masked Language Modeling
Multilingual Text Processing

Use Cases

Natural Language Processing
Sequence Classification
Can be used for sentiment analysis, text classification, and similar tasks (a fine-tuning sketch follows this list).
Token Classification
Suitable for named entity recognition, part-of-speech tagging, and similar tasks.
Question Answering
Can be used to build multilingual question-answering systems.
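Below is a minimal sketch, assuming the transformers library, of preparing the model for fine-tuning on a downstream sequence classification task such as sentiment analysis; the label count, example sentence, and label value are illustrative assumptions, and the classification head is randomly initialized until trained.

```python
# Sketch: attach a classification head to xlm-roberta-large for fine-tuning.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("FacebookAI/xlm-roberta-large")
model = AutoModelForSequenceClassification.from_pretrained(
    "FacebookAI/xlm-roberta-large",
    num_labels=2,  # hypothetical: binary sentiment labels
)

# One supervised step; in practice, train over a labeled multilingual dataset.
batch = tokenizer(["Das Produkt ist großartig!"], return_tensors="pt")
labels = torch.tensor([1])  # hypothetical "positive" label
loss = model(**batch, labels=labels).loss
loss.backward()
print(f"loss: {loss.item():.3f}")
```

Token classification tasks such as named entity recognition follow the same pattern with AutoModelForTokenClassification.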