sn-xlm-roberta-base-snli-mnli-anli-xnli
A dual-encoder (bi-encoder) model trained for zero-shot and few-shot text classification, with multilingual sentence-embedding support
Text Embedding
Transformers · Multilingual · Multilingual sentence embedding · Zero-shot classification · Dual encoder

Downloads 801
Release Time: 3/2/2022
Model Overview
This model is based on the xlm-roberta-base architecture and is designed for zero-shot and few-shot text classification. It maps sentences and paragraphs into a 768-dimensional dense vector space and supports sentence-similarity computation across 13 languages.
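The card does not include usage code; as a minimal sketch of how vectors from such a dense embedding space are typically compared (assuming cosine similarity, the standard metric for sentence-embedding models; the short toy vectors stand in for the model's real 768-dimensional outputs):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional stand-ins; the model itself produces 768-dim embeddings.
v_same_1 = [0.1, 0.3, -0.2, 0.7]
v_same_2 = [0.1, 0.3, -0.2, 0.7]
v_other = [-0.5, 0.1, 0.9, -0.3]

print(round(cosine_similarity(v_same_1, v_same_2), 3))  # identical vectors -> 1.0
print(cosine_similarity(v_same_1, v_other) < 1.0)       # dissimilar vectors -> True
```

A score near 1.0 indicates near-identical meaning; lower scores indicate semantic divergence.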
Model Features
Multilingual support
Supports sentence embedding calculations in 13 languages
Zero-shot learning
Performs classification without any task-specific training data
Few-shot adaptation
Adapts to new tasks from only a handful of labeled examples
Efficient vector representation
Converts text into 768-dimensional dense vectors while preserving semantic information
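Zero-shot classification with an embedding model usually works by embedding the input text and each candidate label, then choosing the nearest label. A sketch of that logic, using a deterministic hashed bag-of-words encoder as a stand-in for the real model (the function names and the toy encoder are illustrative, not part of this model's API):

```python
import math
import zlib

def toy_embed(text, dim=512):
    # Stand-in for the real encoder: hashed bag of words.
    # The actual model would map text to a 768-dim dense vector instead.
    vec = [0.0] * dim
    for word in text.lower().split():
        vec[zlib.crc32(word.encode("utf-8")) % dim] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def zero_shot_classify(text, labels):
    # Embed the text once, embed each candidate label, pick the nearest label.
    text_vec = toy_embed(text)
    return max(labels, key=lambda label: cosine(text_vec, toy_embed(label)))

print(zero_shot_classify("latest sports news about the match", ["sports", "politics"]))
# prints "sports" (the label closest to the text in the toy embedding space)
```

With the real model, `toy_embed` would be replaced by the encoder's 768-dimensional sentence embeddings, and the same nearest-label logic applies in any of the 13 supported languages.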
Model Capabilities
Zero-shot text classification
Few-shot learning
Sentence similarity calculation
Multilingual text processing
Semantic feature extraction
Use Cases
Text classification
Multilingual content classification
Automatically classifies multilingual content without training a separate model for each language
High-accuracy zero-shot classification capability
Information retrieval
Cross-lingual document retrieval
Find semantically similar content across documents in different languages
Retrieval results based on semantics rather than keywords
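Cross-lingual retrieval over a shared embedding space reduces to ranking documents by similarity to the query vector. A sketch with hand-picked toy vectors standing in for the model's multilingual embeddings (the vectors and snippets are invented for illustration):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Toy 3-dim stand-ins for the shared multilingual space; the real model
# would embed each document (in any of the 13 languages) into 768 dims.
docs = {
    "en: the weather is sunny today": [0.90, 0.10, 0.00],
    "de: heute ist das Wetter sonnig": [0.88, 0.12, 0.05],
    "en: stock markets fell sharply": [0.05, 0.90, 0.30],
}
query_vec = [0.92, 0.08, 0.02]  # e.g. the embedding of a Spanish weather query

ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
print(ranked[0])  # the semantically closest document, regardless of language
```

Because documents in different languages land near each other when their meaning matches, the ranking is driven by semantics rather than shared keywords.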
Semantic analysis
Multilingual semantic similarity calculation
Calculate semantic similarity between sentences in different languages
Accurate cross-lingual semantic matching