
TF XLM-RoBERTa Large

Developed by jplu
XLM-RoBERTa is a large-scale cross-lingual sentence encoder trained on 2.5 TB of data covering 100 languages; it achieves strong performance on a range of cross-lingual benchmarks.
Downloads 236
Release Time: 3/2/2022

Model Overview

TensorFlow implementation of the XLM-RoBERTa large model, supporting cross-lingual text understanding and processing tasks.
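
The checkpoint can be loaded through the Hugging Face transformers library. Below is a minimal sketch, assuming the model id jplu/tf-xlm-roberta-large from this card; if the repository does not ship its own tokenizer files, the upstream xlm-roberta-large tokenizer is a drop-in substitute.

```python
# Minimal loading sketch (model id assumed from this card).
from transformers import AutoTokenizer, TFAutoModel

tokenizer = AutoTokenizer.from_pretrained("jplu/tf-xlm-roberta-large")
model = TFAutoModel.from_pretrained("jplu/tf-xlm-roberta-large")

# The tokenizer handles text in any of the 100 pre-training languages.
inputs = tokenizer("Bonjour le monde !", return_tensors="tf")
outputs = model(inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 1024)
```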

Model Features

Cross-lingual capability: supports text understanding and processing in 100 languages.
Large-scale pre-training: trained on 2.5 TB of multilingual data.
TensorFlow implementation: provides TensorFlow-format model weights.

Model Capabilities

Cross-lingual text understanding
Sentence encoding
Text feature extraction
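
A common way to turn the model's token-level features into a fixed-size sentence embedding is mean pooling over non-padding positions. The sketch below illustrates that approach; the encode helper is hypothetical, not an official recipe from the model author.

```python
# Illustrative sketch: sentence encoding by mean-pooling token features.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModel

tokenizer = AutoTokenizer.from_pretrained("jplu/tf-xlm-roberta-large")
model = TFAutoModel.from_pretrained("jplu/tf-xlm-roberta-large")

def encode(sentences):
    """Return one L2-normalized embedding per input sentence (hypothetical helper)."""
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="tf")
    hidden = model(batch).last_hidden_state                       # (batch, seq, 1024)
    mask = tf.cast(batch["attention_mask"], hidden.dtype)[..., tf.newaxis]
    summed = tf.reduce_sum(hidden * mask, axis=1)                 # ignore padding positions
    counts = tf.reduce_sum(mask, axis=1)
    return tf.math.l2_normalize(summed / counts, axis=1)

print(encode(["Hello world", "Hola mundo"]).shape)                # (2, 1024)
```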

Use Cases

Natural Language Processing

Cross-lingual text classification: classify texts written in different languages; the model achieves strong results on multiple cross-lingual benchmarks.
Multilingual semantic search: build a semantic search engine that supports multiple languages, as sketched below.
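
As an illustration of the semantic-search use case, the sketch below embeds a small multilingual corpus and ranks it against a query by cosine similarity. The corpus, query, and encode helper are illustrative assumptions; in practice, retrieval quality improves substantially if the encoder is fine-tuned on a sentence-similarity objective.

```python
# Sketch of multilingual semantic search with cosine-similarity ranking.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModel

tokenizer = AutoTokenizer.from_pretrained("jplu/tf-xlm-roberta-large")
model = TFAutoModel.from_pretrained("jplu/tf-xlm-roberta-large")

def encode(sentences):
    """Mean-pooled, L2-normalized sentence embeddings (hypothetical helper)."""
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="tf")
    hidden = model(batch).last_hidden_state
    mask = tf.cast(batch["attention_mask"], hidden.dtype)[..., tf.newaxis]
    emb = tf.reduce_sum(hidden * mask, axis=1) / tf.reduce_sum(mask, axis=1)
    return tf.math.l2_normalize(emb, axis=1)

corpus = [
    "The cat sits on the mat.",         # English
    "Le chat est assis sur le tapis.",  # French
    "Der Hund läuft im Park.",          # German
]
query = ["Where is the cat sitting?"]

# Cosine similarity reduces to a dot product on unit-norm embeddings.
scores = tf.squeeze(encode(query) @ tf.transpose(encode(corpus)))
print(corpus[int(tf.argmax(scores))])
```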