🚀 BioGPT
Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Of the two main branches of pre-trained language models in the general language domain, i.e., BERT (and its variants) and GPT (and its variants), the first has been extensively studied in the biomedical domain, e.g., BioBERT and PubMedBERT. While these models have achieved great success on a variety of discriminative downstream biomedical tasks, their lack of generation ability constrains their application scope. This work proposes BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and show that it outperforms previous models on most of them. In particular, it achieves F1 scores of 44.98%, 38.42%, and 40.76% on the BC5CDR, KD-DTI, and DDI end-to-end relation extraction tasks, respectively, and 78.2% accuracy on PubMedQA, setting a new record. Our case study on text generation further demonstrates BioGPT's advantage on biomedical literature: it generates fluent descriptions for biomedical terms.
🚀 Quick Start
The source documentation does not provide quick-start specifics; a hedged sketch is given below.
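As a minimal sketch (not from the original card): it assumes the checkpoint is published on the Hugging Face Hub as `microsoft/biogpt` and that `transformers` (≥ 4.25, which ships the BioGPT classes) and `torch` are installed.

```python
from transformers import BioGptForCausalLM, BioGptTokenizer, pipeline, set_seed

# Assumption: the public checkpoint id is "microsoft/biogpt"; adjust if yours differs.
tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")
model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
set_seed(42)  # fix the RNG so sampled output is reproducible

# max_new_tokens=50 mirrors the default inference parameter documented below.
print(generator("COVID-19 is", max_new_tokens=50, do_sample=True))
```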
✨ Key Features
- A domain-specific generative Transformer language model pre-trained on large-scale biomedical literature.
- Evaluated on six biomedical natural language processing tasks, outperforming previous models on most of them.
- Strong results on the BC5CDR, KD-DTI, and DDI end-to-end relation extraction tasks and on PubMedQA.
- Generates fluent descriptions for biomedical terms.
📚 Documentation
Model Information
| Attribute | Details |
| --- | --- |
| Model type | Domain-specific generative Transformer language model |
| Training data | Large-scale biomedical literature (e.g., PubMed) |
| Library name | transformers |
| Task tag | Text generation |
| Tags | Medical |
Inference Parameters
The following parameter can be set at inference time:
- `max_new_tokens`: the maximum number of newly generated tokens; defaults to 50.
Widget Example
Enter text to run inference, e.g., "COVID-19 is"; a programmatic equivalent is sketched below.
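To reproduce the widget behaviour in code, a sketch along the following lines should work. It reuses the `microsoft/biogpt` checkpoint assumption from the quick start; beam search is one reasonable decoding choice here, not something the card prescribes.

```python
import torch
from transformers import BioGptForCausalLM, BioGptTokenizer

tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")
model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")
model.eval()

# Tokenize the widget's example prompt.
inputs = tokenizer("COVID-19 is", return_tensors="pt")

with torch.no_grad():
    # max_new_tokens=50 matches the documented default; beam search trades
    # sampling diversity for a more deterministic, fluent continuation.
    output_ids = model.generate(**inputs, max_new_tokens=50, num_beams=5, early_stopping=True)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```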
📄 License
This project is released under the MIT License.
📚 Citation
If you find BioGPT useful in your research, please cite the following paper:
```bibtex
@article{10.1093/bib/bbac409,
    author = {Luo, Renqian and Sun, Liai and Xia, Yingce and Qin, Tao and Zhang, Sheng and Poon, Hoifung and Liu, Tie-Yan},
    title = "{BioGPT: generative pre-trained transformer for biomedical text generation and mining}",
    journal = {Briefings in Bioinformatics},
    volume = {23},
    number = {6},
    year = {2022},
    month = {09},
    abstract = "{Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Among the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first one has been extensively studied in the biomedical domain, such as BioBERT and PubMedBERT. While they have achieved great success on a variety of discriminative downstream biomedical tasks, the lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. Especially, we get 44.98\%, 38.42\% and 40.76\% F1 score on BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2\% accuracy on PubMedQA, creating a new record. Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fluent descriptions for biomedical terms.}",
    issn = {1477-4054},
    doi = {10.1093/bib/bbac409},
    url = {https://doi.org/10.1093/bib/bbac409},
    note = {bbac409},
    eprint = {https://academic.oup.com/bib/article-pdf/23/6/bbac409/47144271/bbac409.pdf},
}
```