# Roberta-Fact-Check Model
The Roberta-Fact-Check Model is a deep-learning model for text classification. It builds on the Roberta architecture to determine whether a claim is supported or refuted by the provided evidence.
## Quick Start
The Roberta-Fact-Check Model is straightforward to use: given a claim and its corresponding evidence, it outputs a label indicating the relationship between them. The model can easily be integrated into applications for fact-checking and misinformation detection.
## Features
- Utilizes the Roberta architecture for text classification.
- Classifies claims as either supported or refuted based on evidence.
## Installation
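The usage example below depends only on PyTorch and the Hugging Face Transformers library. A minimal setup, assuming a standard pip environment, is:

```bash
pip install torch transformers
```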
## Usage Examples
### Basic Usage
```python
import torch
from transformers import RobertaTokenizer, RobertaForSequenceClassification

# Load the tokenizer and fine-tuned classifier from the Hugging Face Hub
tokenizer = RobertaTokenizer.from_pretrained('Dzeniks/roberta-fact-check')
model = RobertaForSequenceClassification.from_pretrained('Dzeniks/roberta-fact-check')

# A false claim paired with evidence that refutes it
claim = "Albert Einstein worked in the field of computer science"
evidence = "Albert Einstein was a German-born theoretical physicist, widely acknowledged to be one of the greatest and most influential physicists of all time."

# Encode the claim and evidence as a single sequence-pair input
x = tokenizer.encode_plus(claim, evidence, return_tensors="pt")

model.eval()
with torch.no_grad():
    prediction = model(**x)

# The predicted class is the index of the highest logit
label = torch.argmax(prediction[0]).item()
print(f"Label: {label}")
```
## Documentation
### Model Training
The model was trained using the Adam optimizer with a learning rate of 2e-4, an epsilon of 1e-8, and a weight decay of 2e-8. The training data consisted primarily of the FEVER and Hover datasets, along with a small amount of manually created data.
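The following is a minimal fine-tuning sketch of that setup, assuming a standard PyTorch training loop; the `examples` list is a hypothetical stand-in for the FEVER/Hover data, and batching, epochs, and evaluation are omitted for brevity:

```python
import torch
from transformers import RobertaTokenizer, RobertaForSequenceClassification

# Start from a base Roberta checkpoint with a 2-class classification head
tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

# Adam with the hyperparameters described above: lr 2e-4, epsilon 1e-8, weight decay 2e-8
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4, eps=1e-8, weight_decay=2e-8)

# Hypothetical (claim, evidence, label) triples; 0 = supports, 1 = refutes
examples = [
    ("Albert Einstein worked in the field of physics",
     "Albert Einstein was a German-born theoretical physicist.", 0),
]

model.train()
for claim, evidence, label in examples:
    batch = tokenizer(claim, evidence, return_tensors="pt", truncation=True)
    outputs = model(**batch, labels=torch.tensor([label]))
    outputs.loss.backward()   # cross-entropy loss computed by the classification head
    optimizer.step()
    optimizer.zero_grad()
```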
### Input and Output
The model takes a claim and its corresponding evidence as input and returns a label indicating whether the evidence supports or refutes the claim. The two possible labels are:
- `0`: the evidence supports the claim
- `1`: the evidence refutes the claim
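Continuing the Basic Usage example, the integer output can be mapped to a readable verdict; the `id2label` dictionary below is illustrative, following the ordering above:

```python
# Illustrative mapping from class index to verdict (not part of the model's config)
id2label = {0: "supports", 1: "refutes"}

label = 1  # e.g. the value printed by the Basic Usage example
print(f"The evidence {id2label[label]} the claim")
```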
## Technical Details
The model uses the Roberta architecture for text classification. It is trained on a combination of well-known datasets (FEVER and Hover) and a small amount of manually created data, and fine-tuned with the Adam optimizer using the hyperparameters listed above.
## License
This project is licensed under the MIT license.
## Acknowledgements
This model was developed using the Hugging Face Transformers library and trained on the FEVER and Hover datasets. We thank the creators of these datasets for their contributions to the community.
## Disclaimer
Although the Roberta-Fact-Check Model has been trained on a large dataset and can produce accurate results in many scenarios, it may not always yield correct outcomes. Users should exercise caution when making decisions based on the output of any machine learning model.