DistilBERT Base Multilingual Cased Toxicity
A multilingual text toxicity classification model trained on the Jigsaw Toxic Comment Classification Challenge dataset, supporting 10 languages.
Downloads: 12.69k
Release Time: 3/2/2022
Model Overview
This model detects toxic content in text. Built on the DistilBERT architecture and optimized for multilingual use, it is suited to scenarios such as content moderation.
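A minimal usage sketch with the Hugging Face transformers pipeline is shown below. The Hub repository ID used here (citizenlab/distilbert-base-multilingual-cased-toxicity) is an assumption inferred from the model name; substitute the actual repository path if it differs.

```python
from transformers import pipeline

# Load the toxicity classifier from the Hugging Face Hub.
# NOTE: the repository ID below is an assumption; replace it with the
# model's actual Hub path if it differs.
classifier = pipeline(
    "text-classification",
    model="citizenlab/distilbert-base-multilingual-cased-toxicity",
)

# The model is multilingual, so inputs in different languages can be
# scored with the same classifier.
texts = [
    "Have a wonderful day!",    # English
    "Tu es vraiment stupide.",  # French
    "Du bist ein Idiot.",       # German
]

for text, result in zip(texts, classifier(texts)):
    print(f"{text!r} -> {result['label']} ({result['score']:.3f})")
```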
Model Features
Multilingual Support
Supports toxicity detection in 10 major European languages.
Efficient and Lightweight
Built on the DistilBERT architecture, reducing computational resource requirements while maintaining performance.
High Accuracy
Achieves 94.25% accuracy on the Jigsaw dataset.
Model Capabilities
Text Toxicity Detection
Multilingual Text Classification
Content Moderation
Use Cases
Content Moderation
Social Media Comment Filtering
Automatically identifies and filters toxic comments on social media; a filtering sketch follows this section.
Accuracy: 94.25%, F1 score: 0.945
Online Community Management
Helps community administrators identify inappropriate remarks.
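As a sketch of how the comment-filtering use case might look in practice, the snippet below hides comments the classifier flags as toxic with high confidence. The repository ID, the label string "toxic", and the 0.8 threshold are all assumptions to verify against the model's actual label names and your moderation policy.

```python
from transformers import pipeline

# Assumed Hub ID; replace with the model's actual repository path.
classifier = pipeline(
    "text-classification",
    model="citizenlab/distilbert-base-multilingual-cased-toxicity",
)

def filter_comments(comments, threshold=0.8):
    """Split comments into (visible, hidden) lists based on toxicity scores.

    The label string "toxic" and the default threshold are assumptions;
    check the model's actual outputs and tune against your policy.
    """
    visible, hidden = [], []
    for comment, result in zip(comments, classifier(comments)):
        if result["label"] == "toxic" and result["score"] >= threshold:
            hidden.append(comment)
        else:
            visible.append(comment)
    return visible, hidden

visible, hidden = filter_comments([
    "Thanks for sharing, this was really helpful!",
    "You are a complete idiot.",
])
print("visible:", visible)
print("hidden:", hidden)
```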