🚀 Offensive Content Detection Model for Code-Mixed Kannada
This model detects offensive content in code-mixed Kannada. The "mono" in the name refers to the monolingual setting: the model is trained only on Kannada data (both pure Kannada and code-mixed). The weights are initialized from pretrained XLM-RoBERTa-Base, then further pretrained on the target dataset with masked language modeling before fine-tuning with a cross-entropy loss.
This model performed best among the several models we trained for the shared task on Offensive Language Identification in Dravidian Languages at EACL 2021. A genetic-algorithm-based ensemble of test predictions achieved the second-highest weighted F1 score on the leaderboard (weighted F1 on the held-out test set: this model 0.73, ensemble 0.74).
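The genetic-algorithm ensembling mentioned above can be sketched roughly as follows: a population of per-model weight vectors is evolved by selection, crossover, and mutation to maximize a validation metric of the weighted-average ensemble. This is a dependency-free illustration, not the paper's implementation: the data is synthetic, and plain accuracy stands in for the weighted F1 the paper optimizes.

```python
import random

random.seed(0)

# Synthetic validation set: probs[m][i] is model m's probability
# distribution over N_CLASSES classes for validation example i.
N_MODELS, N_EXAMPLES, N_CLASSES = 3, 200, 3
labels = [random.randrange(N_CLASSES) for _ in range(N_EXAMPLES)]

def fake_model(acc):
    """Simulate a classifier that predicts the true class with probability `acc`."""
    out = []
    for y in labels:
        pred = y if random.random() < acc else random.randrange(N_CLASSES)
        p = [0.1] * N_CLASSES
        p[pred] = 0.8
        out.append(p)
    return out

probs = [fake_model(a) for a in (0.70, 0.65, 0.60)]

def fitness(w):
    """Validation accuracy of the ensemble that averages class
    probabilities with per-model weights w."""
    correct = 0
    for i, y in enumerate(labels):
        mix = [sum(w[m] * probs[m][i][c] for m in range(N_MODELS))
               for c in range(N_CLASSES)]
        correct += mix.index(max(mix)) == y
    return correct / N_EXAMPLES

def evolve(pop_size=20, generations=30, mutation=0.1):
    """Evolve model weights: keep the fitter half, breed the rest."""
    pop = [[random.random() for _ in range(N_MODELS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            # Uniform crossover, then additive Gaussian mutation (clamped at 0).
            child = [random.choice(pair) for pair in zip(a, b)]
            child = [max(0.0, g + random.gauss(0, mutation)) for g in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print("best weights:", [round(w, 2) for w in best])
print("ensemble accuracy:", fitness(best))
```

In practice the same search would run over the real models' validation-set probability outputs, with weighted F1 as the fitness function.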
📚 Documentation
Details about our paper
The paper "[Hate-Alert@DravidianLangTech-EACL2021: Ensembling strategies for Transformer-based Offensive language Detection](https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38/)" was published by Debjoy Saha, Naman Paharia, Debajit Chakraborty, Punyajoy Saha, and Animesh Mukherjee.
⚠️ Important Note
Please cite our paper in any published work that uses these resources.
Citation
@inproceedings{saha-etal-2021-hate,
title = "Hate-Alert@{D}ravidian{L}ang{T}ech-{EACL}2021: Ensembling strategies for Transformer-based Offensive language Detection",
author = "Saha, Debjoy and Paharia, Naman and Chakraborty, Debajit and Saha, Punyajoy and Mukherjee, Animesh",
booktitle = "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages",
month = apr,
year = "2021",
address = "Kyiv",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38",
pages = "270--276",
abstract = "Social media often acts as breeding grounds for different forms of offensive content. For low resource languages like Tamil, the situation is more complex due to the poor performance of multilingual or language-specific models and lack of proper benchmark datasets. Based on this shared task {``}Offensive Language Identification in Dravidian Languages{''} at EACL 2021; we present an exhaustive exploration of different transformer models, We also provide a genetic algorithm technique for ensembling different models. Our ensembled models trained separately for each language secured the first position in Tamil, the second position in Kannada, and the first position in Malayalam sub-tasks. The models and codes are provided.",
}
📄 License
This project is licensed under the Apache-2.0 License.