# xlm-roberta-large-pooled-cap-minor
An `xlm-roberta-large` model fine-tuned for multilingual text classification, leveraging minor topic codes from the Comparative Agendas Project.
## 🚀 Quick Start
### Basic Usage
```python
from transformers import AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
pipe = pipeline(
    model="poltextlab/xlm-roberta-large-pooled-cap-minor",
    task="text-classification",
    tokenizer=tokenizer,
    use_fast=False,
    token="<your_hf_read_only_token>"
)

text = "We will place an immediate 6-month halt on the finance driven closure of beds and wards, and set up an independent audit of needs and facilities."
pipe(text)
```
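As with any `text-classification` pipeline, the call returns a list of dictionaries of the form `[{'label': ..., 'score': ...}]`, where the label names come from the model's configuration.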
### Gated Access
#### ⚠️ Important Note

Due to the gated access, you must pass the `token` parameter when loading the model. In earlier versions of the Transformers package, you may need to use the `use_auth_token` parameter instead.
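For example, a minimal sketch of the older-style call (assuming your installed Transformers release predates the `token` keyword):

```python
from transformers import AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")

# Older Transformers releases expect use_auth_token instead of token
# (assumption: the installed version predates the `token` keyword).
pipe = pipeline(
    model="poltextlab/xlm-roberta-large-pooled-cap-minor",
    task="text-classification",
    tokenizer=tokenizer,
    use_fast=False,
    use_auth_token="<your_hf_read_only_token>"
)
```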
## ✨ Features
- **Multilingual Support**: Fine-tuned on multilingual (English, Danish) training data.
- **Zero-Shot Classification**: Suitable for zero-shot classification tasks.
- **Text Classification**: Specifically designed for text classification with high accuracy.
## 📦 Installation
This architecture uses the `sentencepiece` tokenizer. To run the model with Transformers versions earlier than `transformers==4.27`, you need to install it manually.
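A minimal install step (assuming `pip`):

```bash
pip install sentencepiece
```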
## 📚 Documentation
### Model Description
An `xlm-roberta-large` model fine-tuned on multilingual (English, Danish) training data labelled with [minor topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the Comparative Agendas Project.
### Model Performance
The model was evaluated on a test set of 15,349 English examples (20% of the English data).

- **Accuracy**: 0.65
- **Weighted Average F1-score**: 0.64
### Inference Platform
This model is used by the CAP Babel Machine, an open-source and free natural language processing tool designed to simplify and speed up projects for comparative research.
### Cooperation
Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or through the CAP Babel Machine.
### Debugging and Issues
If you encounter a `RuntimeError` when loading the model with the `from_pretrained()` method, passing `ignore_mismatched_sizes=True` should resolve the issue.
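A minimal sketch of that workaround (the exact error depends on your environment):

```python
from transformers import AutoModelForSequenceClassification

# Reload with ignore_mismatched_sizes=True to skip the strict
# shape check that raises the RuntimeError on mismatched weights.
model = AutoModelForSequenceClassification.from_pretrained(
    "poltextlab/xlm-roberta-large-pooled-cap-minor",
    ignore_mismatched_sizes=True,
    token="<your_hf_read_only_token>"
)
```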
## 📄 License
This project is licensed under the MIT License.
| Property | Details |
|----------|---------|
| Model Type | xlm-roberta-large fine-tuned for text classification |
| Training Data | Multilingual (English, Danish) data labelled with minor topic codes from the Comparative Agendas Project |
| Metrics | Accuracy, Weighted Average F1-score |
| Tags | zero-shot-classification, text-classification, pytorch |
| Gated Access Prompt | Our models are intended for academic use only. If you are not affiliated with an academic institution, please provide a rationale for using our models. Please allow us a few business days to manually review subscriptions. |
| Gated Access Fields | Name (text), Country (country), Institution (text), Institution Email (text), Please specify your academic use case (text) |