SpanBERT Large Fine-tuned on TACRED
SpanBERT is a pre-trained language model developed by Facebook AI Research, here fine-tuned on the TACRED dataset for relation extraction.
Release date: 3/2/2022
Model Overview
SpanBERT is an improved BERT model that enhances pre-training by representing and predicting text spans, making it particularly suitable for tasks requiring text span understanding, such as relation extraction.
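The span-level pre-training idea can be illustrated with a minimal sketch (plain Python, illustrative only, not the actual training code): contiguous spans of tokens are selected and masked as whole units, rather than masking tokens independently as standard BERT does.

```python
import random

def mask_spans(tokens, mask_ratio=0.15, max_span_len=3, seed=0):
    """Mask contiguous spans (not independent tokens), sketching
    the span-masking step of SpanBERT-style pre-training."""
    rng = random.Random(seed)
    masked = list(tokens)
    # Total number of tokens we aim to mask.
    budget = max(1, int(len(tokens) * mask_ratio))
    attempts = 0
    while budget > 0 and attempts < 100:
        attempts += 1
        length = min(rng.randint(1, max_span_len), budget, len(tokens))
        start = rng.randrange(0, len(tokens) - length + 1)
        # Skip candidate spans that overlap an already-masked position.
        if any(t == "[MASK]" for t in masked[start:start + length]):
            continue
        masked[start:start + length] = ["[MASK]"] * length
        budget -= length
    return masked
```

The real objective additionally predicts each masked token from the span's boundary tokens (the span boundary objective); this sketch covers only the masking scheme.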
Model Features
Improved Pre-training Method
Enhances the model's understanding of text spans by representing and predicting text spans (rather than individual tokens).
High-Performance Relation Extraction
Achieves an F1 score of 70.8 on the TACRED dataset, outperforming the standard BERT model.
Multi-task Adaptation
In addition to relation extraction, it also performs well on question answering (SQuAD) and coreference resolution.
Model Capabilities
Relation Extraction
Text Understanding
Question Answering
Coreference Resolution
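TACRED-style relation extraction models are typically fed sentences with the subject and object entities wrapped in special marker tokens. The sketch below assumes one common marker convention (`[SUBJ-TYPE]` / `[OBJ-TYPE]`); actual conventions vary between implementations, and some replace the entity text with a type token instead.

```python
def mark_entities(tokens, subj_span, obj_span, subj_type, obj_type):
    """Insert TACRED-style entity markers around the subject and object.

    subj_span / obj_span are (start, end) token indices, end exclusive.
    The marker format here is an assumption for illustration.
    """
    out = []
    for i, tok in enumerate(tokens):
        if i == subj_span[0]:
            out.append(f"[SUBJ-{subj_type}]")
        if i == obj_span[0]:
            out.append(f"[OBJ-{obj_type}]")
        out.append(tok)
        if i == subj_span[1] - 1:
            out.append(f"[/SUBJ-{subj_type}]")
        if i == obj_span[1] - 1:
            out.append(f"[/OBJ-{obj_type}]")
    return " ".join(out)
```

For example, marking "Bill Gates" (PERSON) and "Microsoft" (ORGANIZATION) in "Bill Gates founded Microsoft" yields a single marked string that the model classifies into one of the TACRED relation labels (or no_relation).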
Use Cases
Information Extraction
Knowledge Base Construction
Extracts relationships between entities from unstructured text.
Achieves an F1 score of 70.8 on the TACRED dataset.
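Downstream, the model's per-sentence predictions can be aggregated into a knowledge base of (subject, relation, object) triples. A minimal sketch, assuming hypothetical prediction dicts of the shape such a relation-extraction model would emit:

```python
from collections import defaultdict

def build_kb(predictions):
    """Aggregate relation predictions into a simple triple store.

    `predictions` is a list of dicts with "subject", "relation", and
    "object" keys (an assumed format); no_relation predictions are dropped.
    """
    kb = defaultdict(set)
    for p in predictions:
        if p["relation"] == "no_relation":
            continue
        kb[p["subject"]].add((p["relation"], p["object"]))
    return dict(kb)
```

Deduplication via sets is deliberate: the same triple is often extracted from many sentences in a large corpus.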
Question Answering
Reading Comprehension
Answers questions based on text passages.
Achieves an F1 score of 88.7 on SQuAD 2.0.
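Extractive QA models of this kind score every token as a potential answer start or end, and the answer is decoded as the highest-scoring valid span. The following is an illustrative sketch of that decoding step (not the model's exact inference code):

```python
def best_answer_span(start_scores, end_scores, max_len=30):
    """Return the (start, end) index pair maximizing start + end score,
    subject to start <= end and a maximum answer length."""
    best, best_score = (0, 0), float("-inf")
    for i, s in enumerate(start_scores):
        for j in range(i, min(i + max_len, len(end_scores))):
            score = s + end_scores[j]
            if score > best_score:
                best, best_score = (i, j), score
    return best
```

In practice the scores come from the model's start/end logits over the passage tokens, and SQuAD 2.0 additionally requires comparing the best span score against a no-answer score.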
© 2025 AIbase