XLM-R Large Toxicity Classifier V2
A binary toxicity classifier fine-tuned from xlm-roberta-large, supporting toxicity detection in 15 languages.
Downloads: 850
Released: March 19, 2025
Model Overview
This model detects toxic content in text across 15 languages, making it suitable for content moderation, social media monitoring, and similar scenarios.
Model Features
Multilingual Support
Supports toxicity detection in 15 languages, including major languages such as English, Chinese, and Russian.
High Accuracy
Achieves high F1 scores across languages, e.g., 0.9225 for English and 0.9525 for Russian.
Latest Dataset
Trained on a multilingual toxicity dataset updated in 2025, covering recent linguistic phenomena.
Model Capabilities
Text Classification
Multilingual Processing
Toxic Content Detection
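Since the model card does not state the repository id or label names, the snippet below is a minimal usage sketch with the Hugging Face transformers text-classification pipeline; the `MODEL_ID` and the `"toxic"` label are assumptions to replace with the model's actual values.

```python
# Assumed repository id -- substitute the model's real id on the Hub.
MODEL_ID = "xlmr-large-toxicity-classifier-v2"

def build_classifier():
    """Load the binary toxicity classifier as a text-classification pipeline."""
    from transformers import pipeline  # deferred import: heavy optional dependency
    return pipeline("text-classification", model=MODEL_ID)

def is_toxic(prediction: dict, threshold: float = 0.5) -> bool:
    """Interpret one pipeline prediction of the form {'label': ..., 'score': ...}.

    The 'toxic' label name is an assumption; adjust it to the model's config.
    """
    return prediction["label"].lower() == "toxic" and prediction["score"] >= threshold

# Example (downloads the model weights on first run):
# clf = build_classifier()
# print(is_toxic(clf("Have a nice day!")[0]))
```

The threshold can be raised above 0.5 to trade recall for precision when false flags are costly.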
Use Cases
Content Moderation
Social Media Content Filtering
Automatically identifies and filters toxic comments on social media.
Improves platform content quality and reduces manual moderation costs
Online Community Management
Multilingual Forum Management
Automatically detects inappropriate remarks in multilingual forums.
Maintains a healthy community environment
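The filtering workflow described above can be sketched as a small helper that partitions incoming comments using any classifier returning `{'label': ..., 'score': ...}` per text; `filter_comments` and the `"toxic"` label are hypothetical names, not part of this model's API.

```python
from typing import Callable, Iterable, List, Tuple

def filter_comments(
    comments: Iterable[str],
    classify: Callable[[str], dict],
    threshold: float = 0.5,
) -> Tuple[List[str], List[str]]:
    """Split comments into (clean, flagged) lists.

    `classify` is any callable returning {'label': ..., 'score': ...}
    for a single text, e.g. a wrapper around the transformers pipeline.
    """
    clean: List[str] = []
    flagged: List[str] = []
    for text in comments:
        pred = classify(text)
        toxic = pred["label"].lower() == "toxic" and pred["score"] >= threshold
        (flagged if toxic else clean).append(text)
    return clean, flagged
```

Flagged comments can then be hidden automatically or queued for human review, reducing manual moderation load.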