
SpanBERT Large Fine-tuned on TACRED

Developed by mrm8488
SpanBERT is a pre-trained language model from Facebook AI Research; this checkpoint has been fine-tuned on the TACRED dataset for relation extraction.
Downloads 26
Release Time: 3/2/2022

Model Overview

SpanBERT is an improved BERT variant whose pre-training masks and predicts contiguous text spans rather than individual tokens, making it particularly well suited to tasks that hinge on span-level understanding, such as relation extraction.
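The span-masking idea can be illustrated with a minimal sketch. This is a simplification of the actual pre-training recipe (which also adds a span-boundary objective); span lengths are drawn from a clipped geometric distribution, which biases masking toward shorter spans:

```python
import random

def mask_random_span(tokens, mask_token="[MASK]", p=0.2, max_len=10):
    """Mask one contiguous span of tokens, SpanBERT-style.

    Illustrative sketch only: the real pipeline masks ~15% of tokens
    across many spans and adds a span-boundary objective.
    """
    # Sample a span length from a geometric distribution clipped at
    # max_len, so shorter spans are more likely.
    length = 1
    while length < max_len and random.random() > p:
        length += 1
    length = min(length, len(tokens))
    # Pick a random start position and mask the whole span.
    start = random.randrange(len(tokens) - length + 1)
    return tokens[:start] + [mask_token] * length + tokens[start + length:]

tokens = "the quick brown fox jumps over the lazy dog".split()
print(mask_random_span(tokens))
```

Masking whole spans forces the model to predict multi-token units (names, phrases) from context outside the span, which is closer to what relation extraction demands than single-token masking.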

Model Features

Improved Pre-training Method
Masks and predicts contiguous text spans (rather than individual tokens), strengthening the model's span-level representations.
High-Performance Relation Extraction
Achieves an F1 score of 70.8 on the TACRED dataset, outperforming the standard BERT model.
Multi-task Adaptation
Beyond relation extraction, it also performs strongly on tasks such as SQuAD question answering and coreference resolution.

Model Capabilities

Relation Extraction
Text Understanding
Question Answering
Coreference Resolution
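For the relation-extraction capability, TACRED-style inputs are typically preprocessed by replacing the subject and object entities with typed placeholder tokens. The `[SUBJ-TYPE]`/`[OBJ-TYPE]` scheme below is one common convention; the exact marker format varies between implementations, so treat this as an illustrative sketch:

```python
def mark_entities(tokens, subj_span, subj_type, obj_span, obj_type):
    """Replace subject/object token spans with typed placeholder tokens.

    Spans are half-open (start, end) token indices. Marker format is a
    common TACRED preprocessing choice, not a fixed standard.
    """
    spans = sorted([
        (subj_span, f"[SUBJ-{subj_type}]"),
        (obj_span, f"[OBJ-{obj_type}]"),
    ])
    out, i = [], 0
    for (start, end), marker in spans:
        out.extend(tokens[i:start])  # copy tokens before the span
        out.append(marker)           # collapse the span to one marker
        i = end
    out.extend(tokens[i:])           # copy the tail
    return out

toks = ["Bill", "Gates", "founded", "Microsoft", "in", "1975", "."]
print(mark_entities(toks, (0, 2), "PERSON", (3, 4), "ORGANIZATION"))
```

The marked sentence is then fed to the fine-tuned model as an ordinary classification input, with one label per TACRED relation type (plus `no_relation`).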

Use Cases

Information Extraction
Knowledge Base Construction
Extracts relationships between entities from unstructured text.
Achieves an F1 score of 70.8 on the TACRED dataset.
Question Answering
Reading Comprehension
Answers questions based on text passages.
Achieves an F1 score of 88.7 on SQuAD 2.0.