XLM-RoBERTa Base SNLI-MNLI-ANLI-XNLI
A multilingual NLI model based on XLM-RoBERTa, specifically designed for zero-shot and few-shot text classification tasks.
Text Classification
Transformers · Supports Multiple Languages · #Zero-shot classification · #Multilingual NLI · #Cross-attention

Downloads: 320
Release date: 3/2/2022
Model Overview
This cross-attention model is trained on multiple natural language inference (NLI) datasets and supports zero-shot and few-shot text classification in multiple languages.
Model Features
Multilingual support
Supports zero-shot and few-shot text classification tasks in 14 languages
Cross-attention architecture
Employs cross-attention mechanisms to handle text pair relationships, suitable for NLI tasks
Multi-dataset training
Trained on four mainstream NLI datasets: SNLI, MNLI, ANLI, and XNLI
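The cross-attention (cross-encoder) setup above scores a premise and a hypothesis jointly, which is what makes NLI models usable for zero-shot classification: each candidate label is slotted into a hypothesis template and scored against the input text. A minimal sketch of that reframing, with an illustrative `build_nli_pairs` helper and template that are not part of the model's actual API:

```python
def build_nli_pairs(text, labels, template="This example is {}."):
    """Turn a zero-shot classification task into NLI premise/hypothesis pairs.

    The input text becomes the premise; each candidate label is slotted
    into a hypothesis template. A cross-attention NLI model then scores
    each (premise, hypothesis) pair, and the label whose hypothesis is
    most strongly entailed wins.
    """
    return [(text, template.format(label)) for label in labels]

pairs = build_nli_pairs("I loved this film.", ["positive", "negative"])
# → [("I loved this film.", "This example is positive."),
#    ("I loved this film.", "This example is negative.")]
```

In practice this pairing is handled internally when the model is loaded through a zero-shot classification pipeline, which accepts the candidate labels and an optional hypothesis template directly.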
Model Capabilities
Zero-shot text classification
Few-shot text classification
Multilingual text understanding
Natural language inference
Use Cases
Sentiment analysis
Zero-shot sentiment classification
Determines sentiment polarity without task-specific training data
Given candidate labels such as positive/negative, the model identifies the sentiment of the input text directly
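Once the NLI model has produced an entailment score for each candidate label's hypothesis, the scores are normalized into label probabilities. A sketch of that final step, assuming one entailment logit per label (the `zero_shot_scores` helper and the example logits are illustrative, not model output):

```python
import math

def zero_shot_scores(entailment_logits, labels):
    """Softmax entailment logits across candidate labels.

    entailment_logits: one entailment score per label, as produced by
    running the NLI cross-encoder on each (text, hypothesis) pair.
    Returns a dict mapping each label to its normalized probability.
    """
    m = max(entailment_logits)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in entailment_logits]
    total = sum(exps)
    return {label: e / total for label, e in zip(labels, exps)}

# Hypothetical logits for "I loved this film." against the two hypotheses:
scores = zero_shot_scores([3.2, -1.1], ["positive", "negative"])
# "positive" receives almost all of the probability mass
```

Normalizing across labels rather than thresholding each score independently is what makes the labels compete, which suits single-label sentiment decisions like the one above.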
Content classification
Multilingual content classification
Performs zero-shot classification on texts in multiple languages