
XLM-RoBERTa Large Twitter CAP Minor

Developed by poltextlab
A multilingual text classification model based on the xlm-roberta-large architecture, specifically designed for minor topic coding in the Comparative Agendas Project.
Downloads 21
Release Time: 5/8/2025

Model Overview

This model is fine-tuned on multilingual (English, Danish, Hungarian) training data labeled with Comparative Agendas Project minor topic codes and can be used for zero-shot text classification.
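
The model can be loaded with the Hugging Face transformers library. A minimal sketch follows; the model identifier used below is assumed from the model name and author, and access may require authorization (see Academic Use Only below).

```python
# Minimal sketch: loading the model and classifying one sentence.
# Assumption: the Hugging Face model ID is inferred from the model name
# and author; verify it and any access requirements before use.
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

MODEL_ID = "poltextlab/xlm-roberta-large-twitter-cap-minor"  # assumed identifier

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)

classifier = pipeline(
    "text-classification",
    model=model,
    tokenizer=tokenizer,
    truncation=True,
)

# The returned label corresponds to a Comparative Agendas Project minor topic code.
print(classifier("The government announced new funding for renewable energy research."))
```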

Model Features

Multilingual Support
Supports text classification in three languages: English, Danish, and Hungarian (see the sketch after this list).
Zero-shot Classification
Capable of classifying text into unseen categories, making it suitable for diverse application scenarios.
Academic Use Only
The model is primarily intended for academic use; non-academic institutions require authorization for use.
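
As a rough illustration of the multilingual feature, the same classifier can be applied to sentences in each supported language. The example sentences below are invented for illustration, and the model ID is assumed as in the previous sketch.

```python
# Sketch: one sentence per supported language, classified with the assumed model ID.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="poltextlab/xlm-roberta-large-twitter-cap-minor",  # assumed identifier
    truncation=True,
)

examples = {
    "English": "Parliament debated the new healthcare funding bill today.",
    "Danish": "Regeringen fremlagde i dag et nyt forslag om klimaafgifter.",
    "Hungarian": "A kormány új oktatási reformot jelentett be a héten.",
}

for language, sentence in examples.items():
    prediction = classifier(sentence)[0]
    print(f"{language}: {prediction['label']} (score={prediction['score']:.2f})")
```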

Model Capabilities

Multilingual Text Classification
Zero-shot Learning

Use Cases

Policy Analysis
Political Agenda Analysis
Analyze minor topics in political texts for Comparative Agendas Project research.
Achieved 0.67 accuracy and 0.61 weighted average F1 score on the English test set.
Social Science Research
Cross-lingual Text Classification
Automatically classify and identify topics in multilingual social science texts.
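
For agenda-style analysis across a multilingual corpus, per-document predictions can be tallied into a topic distribution. A hedged sketch, with placeholder documents and the same assumed model ID as above:

```python
# Sketch: tally predicted CAP minor topic codes over a small multilingual
# corpus to approximate a topic distribution (agenda analysis).
# The documents are placeholders and the model ID is assumed.
from collections import Counter
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="poltextlab/xlm-roberta-large-twitter-cap-minor",  # assumed identifier
    truncation=True,
)

documents = [
    "The minister proposed stricter emissions targets for heavy industry.",
    "Ny lov om dagpenge blev vedtaget af Folketinget.",
    "Az országgyűlés elfogadta az új egészségügyi törvényt.",
]

predicted_codes = [classifier(doc)[0]["label"] for doc in documents]
topic_distribution = Counter(predicted_codes)

for code, count in topic_distribution.most_common():
    print(f"{code}: {count}")
```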