RoBERTa Large SemEval2012 V2 Mask Prompt E NCE
A relation understanding model fine-tuned from RoBERTa-large, specializing in lexical relation classification and analogy question answering
Release Time: 8/18/2022
Model Overview
This model is based on the RoBERTa-large architecture and fine-tuned on the SemEval-2012 relational similarity dataset. It is intended primarily for relation understanding tasks, including lexical relation classification and analogy question answering.
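Below is a minimal usage sketch. It assumes the model is published on the Hugging Face Hub under an id matching its name (relbert/roberta-large-semeval2012-v2-mask-prompt-e-nce) and is loaded through the relbert library (pip install relbert); verify the exact id on the Hub.

```python
# Minimal sketch: embed a word pair into a relation vector.
# The model id below is inferred from the model name; confirm it on the Hub.
from relbert import RelBERT

model = RelBERT('relbert/roberta-large-semeval2012-v2-mask-prompt-e-nce')

# The returned vector encodes the relation holding between the two words.
vector = model.get_embedding(['Tokyo', 'Japan'])
print(len(vector))  # 1024 for a RoBERTa-large backbone
```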
Model Features
Efficient Relation Understanding
Performs strongly across multiple relation understanding tasks, with particularly high F1 scores in lexical relation classification
Multi-Task Adaptability
Capable of handling various relation understanding tasks, including relation mapping, lexical relation classification, and analogy question answering
Fine-Tuned from RoBERTa-large
Builds on the pre-trained RoBERTa-large model and achieves strong performance through task-specific fine-tuning
Model Capabilities
Lexical relation classification
Analogy question answering
Relation mapping
Relation similarity calculation
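Relation similarity between two word pairs can be computed as the cosine similarity of their relation embeddings. A sketch under the same assumed model id and relbert interface as above:

```python
# Sketch: relation similarity as cosine similarity of relation embeddings.
import numpy as np
from relbert import RelBERT

model = RelBERT('relbert/roberta-large-semeval2012-v2-mask-prompt-e-nce')

def relation_similarity(pair_a, pair_b):
    """Cosine similarity between the relation embeddings of two word pairs."""
    a = np.asarray(model.get_embedding(pair_a))
    b = np.asarray(model.get_embedding(pair_b))
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Analogous relations (capital-of) should score higher than unrelated ones.
print(relation_similarity(['Tokyo', 'Japan'], ['Paris', 'France']))
print(relation_similarity(['Tokyo', 'Japan'], ['hot', 'cold']))
```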
Use Cases
Education
Analogy Question Answering
Answers analogy questions similar to those on the SAT exam
Achieves 60.96% accuracy on the full SAT analogy dataset
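A SAT-style analogy question can be answered by choosing the candidate pair whose relation embedding is closest to the query pair's. A sketch, with the candidate list being a hypothetical example question:

```python
# Sketch: answer a SAT-style analogy by nearest relation embedding.
import numpy as np
from relbert import RelBERT

model = RelBERT('relbert/roberta-large-semeval2012-v2-mask-prompt-e-nce')

query = ['ostrich', 'bird']  # "ostrich is to bird as ..."
candidates = [['lion', 'cat'], ['goose', 'flock'], ['ewe', 'sheep'],
              ['cub', 'bear'], ['primate', 'monkey']]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

q = np.asarray(model.get_embedding(query))
scores = [cosine(q, np.asarray(model.get_embedding(c))) for c in candidates]
print(candidates[int(np.argmax(scores))])  # expected: ['lion', 'cat']
```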
Natural Language Processing
Lexical Relation Classification
Classifies semantic relations between words
Achieves an F1 score of 92.65% on the BLESS dataset
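One common way to use relation embeddings for lexical relation classification is to train a lightweight classifier on top of them. The tiny training set below is hypothetical and only illustrates the pattern; a benchmark such as BLESS supplies the real labeled pairs.

```python
# Sketch: lexical relation classification with a linear classifier on
# relation embeddings. The four training pairs are hypothetical examples.
import numpy as np
from sklearn.linear_model import LogisticRegression
from relbert import RelBERT

model = RelBERT('relbert/roberta-large-semeval2012-v2-mask-prompt-e-nce')

pairs = [['car', 'vehicle'], ['dog', 'animal'],   # hypernymy
         ['wheel', 'car'], ['leaf', 'tree']]      # meronymy (part-of)
labels = ['hypernym', 'hypernym', 'meronym', 'meronym']

X = np.asarray([model.get_embedding(p) for p in pairs])
clf = LogisticRegression(max_iter=1000).fit(X, labels)

test = np.asarray([model.get_embedding(['engine', 'car'])])
print(clf.predict(test))  # expected: ['meronym']
```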