🚀 Event Extraction QA Model
This is a question-answering model used as part of an event extraction system, introduced in the ACL 2021 paper "Zero-shot Event Extraction via Transfer Learning: Challenges and Insights". It is built on the pretrained roberta-large architecture and fine-tuned on the QAMR dataset.
🚀 Quick Start
Model Demo
To see the model in action, enter a question and a context in the corresponding text boxes of the Hosted Inference API widget on the right.
Example:
- Question:
Who was killed?
- Context:
Police said a car bomb exploded in a crowded open-air market in central Jerusalem on Thursday, killing at least two people.
- Answer:
people
Model Usage
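Since this is an extractive QA model, it can be loaded with the `transformers` question-answering pipeline. The sketch below is an assumption about typical usage, not an official snippet from the authors; the model identifier is a placeholder, so substitute this model's actual Hugging Face Hub repository name.

```python
def extract_answer(question: str, context: str, qa=None):
    """Run extractive QA and return (answer span, confidence score).

    `qa` may be any callable taking `question`/`context` keyword arguments;
    by default a transformers question-answering pipeline is built.
    """
    if qa is None:
        # Lazy import so the helper can also be exercised with a stub QA
        # callable, without loading the model.
        from transformers import pipeline

        # Placeholder repository name -- replace with this model's actual
        # Hugging Face Hub identifier.
        qa = pipeline("question-answering", model="<model-hub-id>")
    result = qa(question=question, context=context)
    return result["answer"], result["score"]


if __name__ == "__main__":
    question = "Who was killed?"
    context = (
        "Police said a car bomb exploded in a crowded open-air market in "
        "central Jerusalem on Thursday, killing at least two people."
    )
    answer, score = extract_answer(question, context)
    print(f"Answer: {answer} (score: {score:.3f})")
```

With the example from the demo section above, the pipeline is expected to return a short span such as "people" along with a confidence score.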
📚 Documentation
BibTeX entry and citation info
@inproceedings{lyu-etal-2021-zero,
    title = "Zero-shot Event Extraction via Transfer Learning: {C}hallenges and Insights",
    author = "Lyu, Qing  and
      Zhang, Hongming  and
      Sulem, Elior  and
      Roth, Dan",
    booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.acl-short.42",
    doi = "10.18653/v1/2021.acl-short.42",
    pages = "322--332",
    abstract = "Event extraction has long been a challenging task, addressed mostly with supervised methods that require expensive annotation and are not extensible to new event ontologies. In this work, we explore the possibility of zero-shot event extraction by formulating it as a set of Textual Entailment (TE) and/or Question Answering (QA) queries (e.g. {``}A city was attacked{''} entails {``}There is an attack{''}), exploiting pretrained TE/QA models for direct transfer. On ACE-2005 and ERE, our system achieves acceptable results, yet there is still a large gap from supervised approaches, showing that current QA and TE technologies fail in transferring to a different domain. To investigate the reasons behind the gap, we analyze the remaining key challenges, their respective impact, and possible improvement directions.",
}