
XLM-RoBERTa-XL

Developed by Facebook AI
XLM-RoBERTa-XL is a multilingual model pre-trained on 2.5TB of filtered CommonCrawl data, covering 100 languages.
Downloads: 53.53k
Release date: 3/2/2022

Model Overview

XLM-RoBERTa-XL is an extra-large multilingual version of RoBERTa, pre-trained with the Masked Language Modeling (MLM) objective. It is intended primarily as a base model for fine-tuning on downstream tasks.
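As a minimal sketch of trying the pre-trained model directly (assuming the Hugging Face `transformers` library and the Hub checkpoint id `facebook/xlm-roberta-xl`), masked-word prediction can be exercised with the fill-mask pipeline:

```python
from transformers import pipeline

# Hugging Face Hub id assumed for this checkpoint.
MODEL_ID = "facebook/xlm-roberta-xl"

def predict_masked(text, top_k=3):
    """Return the top-k candidate fills for the <mask> token in `text`.

    XLM-R models use "<mask>" as their mask token; prompts may be in
    any of the 100 supported languages.
    """
    unmasker = pipeline("fill-mask", model=MODEL_ID)
    return [result["token_str"] for result in unmasker(text, top_k=top_k)]

# Example usage (downloads several GB of weights on first run):
# predict_masked("Hello, I'm a <mask> model.")
```

Note that this checkpoint is large, so inference typically requires a GPU with substantial memory; for quick experiments the smaller `xlm-roberta-base` drop-in works with the same code.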

Model Features

Multilingual Support
Pre-trained on, and fine-tunable across, 100 languages
Large-Scale Pre-training
Pre-trained on 2.5TB of filtered CommonCrawl data
Masked Language Modeling
Pre-trained with the MLM objective: randomly masked tokens are predicted from their surrounding context
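The MLM masking procedure can be sketched in plain Python. The split below (15% of tokens selected; of those, 80% replaced by the mask token, 10% by a random token, 10% left unchanged) follows the standard BERT/RoBERTa recipe, which this sketch assumes XLM-RoBERTa-XL inherits:

```python
import random

MASK = "<mask>"  # XLM-R's mask token

def mlm_mask(tokens, vocab, mask_prob=0.15, seed=0):
    """Apply BERT/RoBERTa-style masking to a list of tokens.

    Each token is selected with probability `mask_prob`. A selected
    token becomes MASK (80%), a random vocabulary token (10%), or
    stays unchanged (10%). Returns (corrupted, labels), where labels
    holds the original token at selected positions and None elsewhere,
    so the loss is computed only on selected positions.
    """
    rng = random.Random(seed)
    corrupted, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            labels.append(tok)
            r = rng.random()
            if r < 0.8:
                corrupted.append(MASK)
            elif r < 0.9:
                corrupted.append(rng.choice(vocab))
            else:
                corrupted.append(tok)
        else:
            labels.append(None)
            corrupted.append(tok)
    return corrupted, labels
```

In real pre-training this runs over subword pieces from the SentencePiece tokenizer rather than whitespace words, and RoBERTa-style training re-samples the mask dynamically each epoch.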

Model Capabilities

Multilingual text understanding
Masked language prediction
Downstream task fine-tuning

Use Cases

Natural Language Processing
Sequence Classification
Can be used for text classification tasks
Token Classification
Suitable for tasks such as named entity recognition (NER)
Question Answering
Can be used to build multilingual QA systems