TF XLM-RoBERTa Base
XLM-RoBERTa is a scaled cross-lingual sentence encoder, pre-trained on 2.5 TB of filtered CommonCrawl data covering 100 languages, that achieves strong performance on multiple cross-lingual benchmarks.
Downloads: 4,820
Released: 3/2/2022
Model Overview
An XLM-RoBERTa base model with TensorFlow weights, supporting cross-lingual understanding tasks.
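A minimal loading sketch, assuming the Hugging Face `transformers` library; the checkpoint id `jplu/tf-xlm-roberta-base` is an illustrative assumption and should be replaced with this model's actual repository id.

```python
# Minimal sketch: load the TensorFlow XLM-RoBERTa base model and encode a sentence.
# The checkpoint id below is an assumption; substitute the actual repository id.
import tensorflow as tf
from transformers import AutoTokenizer, TFXLMRobertaModel

model_id = "jplu/tf-xlm-roberta-base"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFXLMRobertaModel.from_pretrained(model_id)

inputs = tokenizer("Hello, world!", return_tensors="tf")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 768)
```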
Model Features
Cross-lingual capability
Trained on text in 100 languages, the model has strong cross-lingual understanding abilities (see the encoding sketch after this list).
Large-scale pre-training
Pre-trained on 2.5 TB of data, the model has rich linguistic knowledge.
TensorFlow support
A release of the model with native weights for the TensorFlow framework.
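To illustrate the cross-lingual capability above, here is a hedged sketch that mean-pools token embeddings into sentence vectors and compares an English sentence with its French translation; the checkpoint id is again an assumption.

```python
# Sketch: compare sentence embeddings across languages via mean pooling.
import tensorflow as tf
from transformers import AutoTokenizer, TFXLMRobertaModel

model_id = "jplu/tf-xlm-roberta-base"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFXLMRobertaModel.from_pretrained(model_id)

sentences = ["The cat sits on the mat.", "Le chat est assis sur le tapis."]
enc = tokenizer(sentences, padding=True, return_tensors="tf")
hidden = model(**enc).last_hidden_state  # (2, seq_len, 768)

# Mean-pool over real (non-padding) tokens to get one vector per sentence.
mask = tf.cast(enc["attention_mask"], tf.float32)[:, :, None]
emb = tf.reduce_sum(hidden * mask, axis=1) / tf.reduce_sum(mask, axis=1)

# Cosine similarity between the English and French sentence embeddings.
emb = tf.math.l2_normalize(emb, axis=1)
print(float(tf.reduce_sum(emb[0] * emb[1])))
```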
Model Capabilities
Cross-lingual text understanding
Text encoding
Multilingual task processing (see the masked-token sketch below)
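Since XLM-RoBERTa is pre-trained with masked language modelling, a quick way to probe its multilingual understanding is to fill a masked token in a non-English sentence. A minimal sketch, with the same assumed checkpoint id:

```python
# Sketch: predict a masked token in a French sentence.
import tensorflow as tf
from transformers import AutoTokenizer, TFXLMRobertaForMaskedLM

model_id = "jplu/tf-xlm-roberta-base"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFXLMRobertaForMaskedLM.from_pretrained(model_id)

# French input: "The capital of France is <mask>."
inputs = tokenizer("La capitale de la France est <mask>.", return_tensors="tf")
logits = model(**inputs).logits

# Locate the masked position and take the highest-scoring vocabulary token.
mask_pos = tf.where(inputs["input_ids"][0] == tokenizer.mask_token_id)[0, 0]
best = tf.argmax(logits[0, mask_pos])
print(tokenizer.decode([int(best)]))  # expected: "Paris"
```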
Use Cases
Natural Language Processing
Cross-lingual text classification
Classifying texts written in multiple languages (a fine-tuning sketch follows this use case).
Reported state-of-the-art results on multiple cross-lingual benchmarks, such as XNLI, at the time of release.
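A minimal fine-tuning sketch for cross-lingual classification, using the Keras training pattern that Hugging Face TF models support; the checkpoint id and the two-example toy dataset are illustrative assumptions only.

```python
# Sketch: fine-tune XLM-RoBERTa for binary sentiment classification on a toy
# bilingual dataset. Checkpoint id and data are illustrative assumptions.
import tensorflow as tf
from transformers import AutoTokenizer, TFXLMRobertaForSequenceClassification

model_id = "jplu/tf-xlm-roberta-base"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFXLMRobertaForSequenceClassification.from_pretrained(model_id, num_labels=2)

texts = ["I love this movie.", "Ce film est terrible."]  # English + French
labels = [1, 0]
enc = tokenizer(texts, padding=True, return_tensors="tf")
dataset = tf.data.Dataset.from_tensor_slices((dict(enc), labels)).batch(2)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(dataset, epochs=1)
```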
Multilingual QA system
Building a QA system that supports questions and contexts in multiple languages; a pipeline sketch follows.
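A sketch of a multilingual extractive QA setup using the `transformers` pipeline API. Note that the base model must first be fine-tuned on QA data; `deepset/xlm-roberta-base-squad2` is named only as an example of such a checkpoint.

```python
# Sketch: multilingual extractive question answering via the pipeline API.
# The fine-tuned checkpoint id is an example; the raw base model would need
# QA fine-tuning first.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/xlm-roberta-base-squad2")

# French question over a French context: "Where is the Eiffel Tower?"
result = qa(
    question="Où se trouve la tour Eiffel ?",
    context="La tour Eiffel se trouve à Paris, en France.",
)
print(result["answer"])  # expected: "Paris"
```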