
XLM-RoBERTa Large English CAP Minor Platforms

Developed by poltextlab
A multilingual text classification model based on the xlm-roberta-large architecture, designed for minor topic coding within the Comparative Agendas Project (CAP).
Downloads: 18
Release date: 4/8/2025

Model Overview

This model is fine-tuned on English training data labeled with minor topic codes from the Comparative Agendas Project; it is intended primarily for zero-shot text classification tasks.
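A minimal sketch of how such a checkpoint is typically loaded for inference follows. The Hugging Face model id is assumed from the model name and may differ; access may also be gated per the academic-use policy below.

```python
# Minimal usage sketch; the model id is an assumption and access may be gated.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "poltextlab/xlm-roberta-large-english-cap-minor-platforms"  # assumed id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "The government announced new funding for primary school teachers."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=256)

with torch.no_grad():
    logits = model(**inputs).logits

# id2label maps the predicted index to a CAP minor topic label
predicted = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted])
```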

Model Features

Multilingual support: built on the XLM-RoBERTa architecture and able to process text in multiple languages.
Minor topic coding: optimized specifically for the minor topic coding scheme of the Comparative Agendas Project.
Academic use only: intended primarily for academic research; non-academic use requires a special application.

Model Capabilities

Zero-shot text classification
Multilingual text processing
Minor topic recognition

Use Cases

Political text analysis
Policy document classification: performing minor topic coding on government policy documents (accuracy 0.39, weighted F1 score 0.3); see the sketch after this list.
Comparative political research
Supporting cross-national comparative analysis of political agendas.
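A sketch of batch coding for a set of policy documents, again assuming the model id poltextlab/xlm-roberta-large-english-cap-minor-platforms; the returned labels follow whatever id2label mapping the checkpoint ships with.

```python
# Batch inference sketch using the transformers text-classification pipeline;
# the model id is an assumption, not a confirmed identifier.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="poltextlab/xlm-roberta-large-english-cap-minor-platforms",  # assumed id
)

documents = [
    "Parliament debated a bill tightening emission limits for power plants.",
    "The ministry proposed reforms to unemployment benefit eligibility.",
]

# Each result carries the predicted minor topic label and its confidence score
for doc, result in zip(documents, classifier(documents, truncation=True)):
    print(result["label"], round(result["score"], 3), doc[:60])
```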