
XLM-RoBERTa-XL

Developed by Facebook
XLM-RoBERTa-XL is a multilingual model pretrained on 2.5TB of filtered CommonCrawl data covering 100 languages, based on an extra-large version of the RoBERTa architecture.
Downloads: 13.19k
Release Time: 3/2/2022

Model Overview

This model was pretrained on text in 100 languages with the masked language modeling (MLM) objective. It is intended primarily for extracting text features and for fine-tuning on downstream tasks.
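As a quick illustration of the MLM objective in use, the checkpoint can be loaded with the Hugging Face transformers fill-mask pipeline. This is a minimal sketch assuming the facebook/xlm-roberta-xl model id on the Hugging Face Hub and enough memory for a model of this size.

```python
# Minimal masked language modeling sketch, assuming the Hugging Face
# `transformers` library and the `facebook/xlm-roberta-xl` checkpoint.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="facebook/xlm-roberta-xl")

# XLM-RoBERTa models use <mask> as their mask token.
print(unmasker("Hello, I'm a <mask> model."))
```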

Model Features

Multilingual support
Pretrained on, and able to extract features from, text in 100 languages
Large-scale pretraining
Pretrained on 2.5TB of filtered CommonCrawl data
RoBERTa architecture
Uses an improved RoBERTa architecture with an optimized training procedure

Model Capabilities

Masked language modeling
Multilingual text feature extraction (see the sketch after this list)
Downstream task fine-tuning
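The sketch below shows feature extraction with the standard transformers Auto classes; it assumes PyTorch and the facebook/xlm-roberta-xl checkpoint, and the mean-pooling step is one common (illustrative) way to get a sentence vector, not something prescribed by the model card.

```python
# Minimal multilingual feature-extraction sketch, assuming the Hugging Face
# `transformers` library and PyTorch.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/xlm-roberta-xl")
model = AutoModel.from_pretrained("facebook/xlm-roberta-xl")

# The same tokenizer and model handle any of the 100 pretraining languages.
text = "Bonjour, je suis un modèle multilingue."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Last hidden states are token-level features; mean-pool them for a
# single sentence-level embedding.
sentence_embedding = outputs.last_hidden_state.mean(dim=1)
print(sentence_embedding.shape)  # torch.Size([1, hidden_size])
```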

Use Cases

Natural Language Processing
Sequence classification
Can be used for text classification tasks such as sentiment analysis, as sketched after this list
Token classification
Suitable for tasks like named entity recognition
Question answering
Can be used to build multilingual QA systems
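A minimal fine-tuning sketch for the sequence classification use case follows, assuming the Hugging Face transformers and datasets libraries. The dataset, subset size, and hyperparameters are illustrative placeholders, not settings from the model card.

```python
# Illustrative sequence-classification fine-tuning sketch (e.g. sentiment
# analysis). Dataset and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    DataCollatorWithPadding,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("facebook/xlm-roberta-xl")
model = AutoModelForSequenceClassification.from_pretrained(
    "facebook/xlm-roberta-xl", num_labels=2
)

# Placeholder binary sentiment dataset; any labeled text corpus works.
dataset = load_dataset("imdb")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="xlm-roberta-xl-sentiment",
    per_device_train_batch_size=2,   # the XL model is large; keep batches small
    gradient_accumulation_steps=8,
    learning_rate=1e-5,
    num_train_epochs=1,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
    data_collator=DataCollatorWithPadding(tokenizer=tokenizer),
)
trainer.train()
```

Token classification and question answering follow the same pattern, swapping in AutoModelForTokenClassification or AutoModelForQuestionAnswering with a matching labeled dataset.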