🚀 bert-large-finetuned-squad2
This model is fine-tuned from bert-large-uncased and performs strongly on the SQuAD2.0 dataset. It is intended for extractive question-answering tasks.
🚀 Quick Start
This model is built on bert-large-uncased and fine-tuned on the SQuAD2.0 dataset. You can find the corresponding papers here (model) and here (data).
✨ Features
- Language: English
- Tags: pytorch, question-answering
- Dataset: squad2
- Evaluation metrics: exact, f1
| Property | Details |
|----------|---------|
| Model type | Question-answering model fine-tuned from bert-large-uncased |
| Training data | SQuAD2.0 |
💻 Usage Example
Basic usage
```python
from transformers import pipeline

model_name = "phiyodr/bert-large-finetuned-squad2"

# Build a question-answering pipeline from the fine-tuned model and its tokenizer
nlp = pipeline("question-answering", model=model_name, tokenizer=model_name)

inputs = {
    "question": "What discipline did Winkelmann create?",
    "context": "Johann Joachim Winckelmann was a German art historian and archaeologist. He was a pioneering Hellenist who first articulated the difference between Greek, Greco-Roman and Roman art. \"The prophet and founding hero of modern archaeology\", Winckelmann was one of the founders of scientific archaeology and first applied the categories of style on a large, systematic basis to the history of art.",
}

print(nlp(inputs))
```
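The call returns a dictionary with the keys `score`, `start`, `end`, and `answer`, where `start` and `end` are character offsets of the extracted span within the context.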
🔧 Technical Details
Training procedure
```json
{
  "base_model": "bert-large-uncased",
  "do_lower_case": true,
  "learning_rate": 3e-5,
  "num_train_epochs": 4,
  "max_seq_length": 384,
  "doc_stride": 128,
  "max_query_length": 64,
  "batch_size": 96
}
```
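The card does not include the training script itself. Below is a minimal, hypothetical sketch of how the hyperparameters above could map onto a fine-tuning run with the Hugging Face `datasets` and `Trainer` APIs; everything other than the hyperparameter values (the preprocessing function, output directory, and batching choices) is an assumption, not the author's actual code.

```python
# Hypothetical fine-tuning sketch mirroring the hyperparameters above.
# Not the author's script; names beyond the hyperparameter values are illustrative.
from datasets import load_dataset
from transformers import (
    AutoModelForQuestionAnswering,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
    default_data_collator,
)

base_model = "bert-large-uncased"
tokenizer = AutoTokenizer.from_pretrained(base_model, do_lower_case=True)
model = AutoModelForQuestionAnswering.from_pretrained(base_model)

squad_v2 = load_dataset("squad_v2")

def preprocess(examples):
    # Truncate only the context and split long contexts into overlapping
    # windows, matching max_seq_length=384 and doc_stride=128 above.
    tokenized = tokenizer(
        examples["question"],
        examples["context"],
        max_length=384,
        truncation="only_second",
        stride=128,
        return_overflowing_tokens=True,
        return_offsets_mapping=True,
        padding="max_length",
    )
    sample_map = tokenized.pop("overflow_to_sample_mapping")
    offset_mapping = tokenized.pop("offset_mapping")
    start_positions, end_positions = [], []
    for i, offsets in enumerate(offset_mapping):
        sequence_ids = tokenized.sequence_ids(i)
        answers = examples["answers"][sample_map[i]]
        # Locate the context token span inside this window.
        ctx_start = sequence_ids.index(1)
        ctx_end = len(sequence_ids) - 1 - sequence_ids[::-1].index(1)
        if len(answers["answer_start"]) == 0:
            # SQuAD2.0 unanswerable question: point both labels at [CLS] (index 0).
            start_positions.append(0)
            end_positions.append(0)
            continue
        start_char = answers["answer_start"][0]
        end_char = start_char + len(answers["text"][0])
        if offsets[ctx_start][0] > start_char or offsets[ctx_end][1] < end_char:
            # Answer is not fully inside this window: treat it as unanswerable.
            start_positions.append(0)
            end_positions.append(0)
            continue
        # Narrow in on the tokens that cover the answer characters.
        idx = ctx_start
        while idx <= ctx_end and offsets[idx][0] <= start_char:
            idx += 1
        start_positions.append(idx - 1)
        idx = ctx_end
        while idx >= ctx_start and offsets[idx][1] >= end_char:
            idx -= 1
        end_positions.append(idx + 1)
    tokenized["start_positions"] = start_positions
    tokenized["end_positions"] = end_positions
    return tokenized

train_dataset = squad_v2["train"].map(
    preprocess, batched=True, remove_columns=squad_v2["train"].column_names
)

args = TrainingArguments(
    output_dir="bert-large-finetuned-squad2",  # illustrative path
    learning_rate=3e-5,
    num_train_epochs=4,
    per_device_train_batch_size=96,  # "batch_size": 96; on most GPUs this would
                                     # need gradient accumulation in practice
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    data_collator=default_data_collator,
)
trainer.train()
```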
Evaluation results
```json
{
  "exact": 76.22336393497852,
  "f1": 79.72527570261339,
  "total": 11873,
  "HasAns_exact": 76.19770580296895,
  "HasAns_f1": 83.21157193271408,
  "HasAns_total": 5928,
  "NoAns_exact": 76.24894869638352,
  "NoAns_f1": 76.24894869638352,
  "NoAns_total": 5945
}
```
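These are the standard SQuAD2.0 metric fields, split into answerable (HasAns) and unanswerable (NoAns) subsets. If you want to reproduce the same fields for your own predictions, a minimal sketch using the `evaluate` library's `squad_v2` metric is shown below; the example id, texts, and `answer_start` offset are placeholders.

```python
# Hedged sketch: computing SQuAD2.0-style metrics with the `evaluate` library.
# The id, answer texts, and answer_start offset below are placeholders.
import evaluate

squad_v2_metric = evaluate.load("squad_v2")

predictions = [{
    "id": "example-0",
    "prediction_text": "scientific archaeology",
    "no_answer_probability": 0.0,  # SQuAD2.0 also scores abstention
}]
references = [{
    "id": "example-0",
    "answers": {"text": ["scientific archaeology"], "answer_start": [276]},
}]

# Returns a dict containing the same keys as the results above:
# exact, f1, total, HasAns_exact, HasAns_f1, HasAns_total, NoAns_exact, ...
print(squad_v2_metric.compute(predictions=predictions, references=references))
```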