🚀 BioGPT
BioGPT is a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. Inspired by the great success of pre-trained language models in the general natural language domain, such models have also attracted growing attention in the biomedical domain. BioGPT performs strongly on multiple biomedical natural language processing tasks and can generate fluent descriptions for biomedical terms, broadening the application scope of pre-trained language models in the biomedical field.
🚀 Quick Start
Following their great success in the general natural language domain, pre-trained language models have received increasing attention in the biomedical domain. Of the two main branches of pre-trained language models in the general language domain, BERT (and its variants) and GPT (and its variants), the former has been extensively studied in the biomedical domain, e.g., BioBERT and PubMedBERT. While these models have achieved great success on a variety of discriminative downstream biomedical tasks, their lack of generation ability constrains their application scope.
In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and show that our model outperforms previous models on most of them. In particular, we achieve F1 scores of 44.98%, 38.42% and 40.76% on the BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2% accuracy on PubMedQA, setting a new record. Our case study on text generation further demonstrates BioGPT's advantage on biomedical literature, generating fluent descriptions for biomedical terms.
📄 License
This project is licensed under the MIT License.
📚 Documentation
Model Information
| Property | Details |
|----------|---------|
| Model type | Domain-specific generative Transformer language model |
| Training data | Large-scale biomedical literature |
| Evaluation metrics | Accuracy, F1 score |
| Supported tasks | Biomedical NLP tasks (e.g., relation extraction, question answering) |
Inference Parameters
- `max_new_tokens`: 250
- `do_sample`: False
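These two parameters map directly onto Hugging Face `transformers` generation arguments. A minimal sketch, assuming the `transformers` package is installed and the model is served from the public `microsoft/biogpt` checkpoint (the checkpoint id is an assumption, not stated above):

```python
# Greedy decoding capped at 250 new tokens, mirroring the parameters above.
GENERATION_KWARGS = {"max_new_tokens": 250, "do_sample": False}

def generate(prompt: str) -> str:
    # Deferred import: requires `pip install transformers`.
    # Assumption: the Hugging Face "microsoft/biogpt" checkpoint.
    from transformers import pipeline
    generator = pipeline("text-generation", model="microsoft/biogpt")
    return generator(prompt, **GENERATION_KWARGS)[0]["generated_text"]
```

With `do_sample=False` decoding is greedy, so repeated calls on the same prompt return the same output.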
Example Input
```json
{
    "text": "question: Can 'high-risk' human papillomaviruses (HPVs) be detected in human breast milk? context: Using polymerase chain reaction techniques, we evaluated the presence of HPV infection in human breast milk collected from 21 HPV-positive and 11 HPV-negative mothers. Of the 32 studied human milk specimens, no 'high-risk' HPV 16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58 or 58 DNA was detected. answer: This preliminary case-control study indicates the absence of mucosal 'high-risk' HPV types in human breast milk."
}
```
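The example input packs a PubMedQA-style record into one flat `question: … context: … answer: …` string. A small helper (the function name is ours, not part of BioGPT's API) can assemble such payloads:

```python
def build_pubmedqa_input(question: str, context: str, answer: str) -> dict:
    # Mirrors the flat "question: ... context: ... answer: ..." layout of the
    # example input above. Hypothetical helper, not part of BioGPT's API.
    return {"text": f"question: {question} context: {context} answer: {answer}"}

payload = build_pubmedqa_input(
    "Can 'high-risk' human papillomaviruses (HPVs) be detected in human breast milk?",
    "Using polymerase chain reaction techniques, we evaluated the presence of "
    "HPV infection in human breast milk collected from 21 HPV-positive and "
    "11 HPV-negative mothers.",
    "This preliminary case-control study indicates the absence of mucosal "
    "'high-risk' HPV types in human breast milk.",
)
```

Keeping the three fields in this fixed order matches the format shown in the example above.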
📚 Citation
If you find BioGPT useful in your research, please cite the following paper:
```bibtex
@article{10.1093/bib/bbac409,
author = {Luo, Renqian and Sun, Liai and Xia, Yingce and Qin, Tao and Zhang, Sheng and Poon, Hoifung and Liu, Tie-Yan},
title = "{BioGPT: generative pre-trained transformer for biomedical text generation and mining}",
journal = {Briefings in Bioinformatics},
volume = {23},
number = {6},
year = {2022},
month = {09},
abstract = "{Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Among the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first one has been extensively studied in the biomedical domain, such as BioBERT and PubMedBERT. While they have achieved great success on a variety of discriminative downstream biomedical tasks, the lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. Especially, we get 44.98\%, 38.42\% and 40.76\% F1 score on BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2\% accuracy on PubMedQA, creating a new record. Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fluent descriptions for biomedical terms.}",
issn = {1477-4054},
doi = {10.1093/bib/bbac409},
url = {https://doi.org/10.1093/bib/bbac409},
note = {bbac409},
eprint = {https://academic.oup.com/bib/article-pdf/23/6/bbac409/47144271/bbac409.pdf},
}
```