
XLM-RoBERTa Large TyDiP

Developed by Genius1237
A multilingual politeness classification model based on xlm-roberta-large, fine-tuned on the English subset of the TyDiP dataset and supporting politeness classification in 10 languages.
Downloads: 929
Release date: 4/20/2023

Model Overview

This model classifies text as polite or impolite. It is designed for multilingual scenarios and performs well in English and 9 other languages.
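
The sketch below shows one way to run inference with the Hugging Face transformers pipeline. The Hub model ID is inferred from this card and the exact label strings returned by the model are assumptions, not verified details.

```python
# Minimal inference sketch for the politeness classifier.
# The model ID below is assumed from this card; the label strings in the
# output depend on the model's config and may be e.g. 'polite'/'impolite'
# or generic 'LABEL_0'/'LABEL_1'.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Genius1237/xlm-roberta-large-tydip",  # assumed Hub ID
)

print(classifier("Could you please take a look at this when you have a moment?"))
# Expected output shape: [{'label': ..., 'score': ...}]
```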

Model Features

Multilingual Support
Supports politeness classification in 10 languages, including languages with non-Latin scripts such as Hindi and Korean
High Accuracy
Achieves 0.892 accuracy on the English test set, with good performance in other languages
Cross-lingual Capability
Built on the XLM-R architecture, it has strong cross-lingual transfer capabilities and may generalize to additional languages

Model Capabilities

Multilingual Text Classification
Politeness Judgment
Cross-lingual Transfer Learning

Use Cases

Social Media Analysis
Comment Politeness Filtering
Automatically identifies the politeness level of social media comments
Helps filter out impolite content (see the filtering sketch after this section)
Customer Service Systems
Customer Service Response Quality Monitoring
Evaluates the politeness level of customer service responses
Improves customer service quality
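
The following sketch illustrates the comment-filtering use case. The helper name keep_polite, the confidence threshold, and the label strings are all assumptions for illustration; they are not part of the model's documented interface.

```python
# Hypothetical comment-filtering helper for the social-media use case.
# Model ID, label strings, and the 0.5 threshold are assumptions.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Genius1237/xlm-roberta-large-tydip",  # assumed Hub ID
)

def keep_polite(comments, threshold=0.5):
    """Return only the comments the model does not flag as impolite."""
    results = classifier(comments)
    kept = []
    for text, result in zip(comments, results):
        # Drop a comment only when the predicted label reads as impolite
        # and the model is sufficiently confident.
        is_impolite = "impolite" in result["label"].lower()
        if not (is_impolite and result["score"] > threshold):
            kept.append(text)
    return kept

comments = [
    "Thanks so much for taking the time to answer this!",
    "This answer is useless, do it again.",
]
print(keep_polite(comments))
```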