# BioGPT
Pre-trained language models have achieved remarkable success in the general natural language domain, and their application in the biomedical field has also attracted increasing attention. BioGPT is a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature, aiming to address various biomedical natural language processing tasks.
## 🚀 Quick Start
Quick-start instructions are not provided in the original document.
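As a minimal sketch, assuming the checkpoint is distributed through the Hugging Face `transformers` library named in the metadata table below, under the Hub ID `microsoft/biogpt` (an assumption, not stated in the original text), text can be generated with the high-level pipeline API:

```python
# Minimal sketch; the "microsoft/biogpt" Hub ID is an assumption,
# not something stated in the original document.
from transformers import pipeline

generator = pipeline("text-generation", model="microsoft/biogpt")
# Generate a short biomedical continuation for a prompt.
print(generator("COVID-19 is", max_length=40, num_return_sequences=1))
```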
## ✨ Features
- Domain-Specific Pre-training: Pre-trained on large-scale biomedical literature, enabling it to better understand and process biomedical text.
- Generative Ability: Unlike previous BERT-style models in the biomedical domain, BioGPT has strong text generation capabilities.
- High Performance: Outperforms previous models on most of the six evaluated biomedical natural language processing tasks, achieving 44.98%, 38.42% and 40.76% F1 score on the BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks respectively, and 78.2% accuracy on PubMedQA.
## 📦 Installation
Installation steps are not provided in the original document.
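A plausible setup, inferred from the `transformers` library named in the metadata table below rather than stated in the original text, is `pip install transformers sacremoses` (the BioGPT tokenizer in `transformers` depends on `sacremoses`) together with a PyTorch installation.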
## 💻 Usage Examples
Usage examples are not provided in the original document.
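As a lower-level sketch, the dedicated BioGPT classes in `transformers` can be used directly; the checkpoint ID, prompt, and beam-search settings below are illustrative assumptions rather than values from the original document:

```python
# Illustrative sketch; the checkpoint ID, prompt, and generation settings
# are assumptions, not taken from the original document.
import torch
from transformers import BioGptForCausalLM, BioGptTokenizer

tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")
model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")

# Tokenize a biomedical prompt and decode a beam-search continuation.
inputs = tokenizer("Aspirin is", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=40,
                                num_beams=5, early_stopping=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```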
## 📚 Documentation
Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Of the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first has been extensively studied in the biomedical domain, for example BioBERT and PubMedBERT. While these models have achieved great success on a variety of discriminative downstream biomedical tasks, their lack of generation ability constrains their application scope.
In this paper, the authors propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. They evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that it outperforms previous models on most of them. In particular, it achieves 44.98%, 38.42% and 40.76% F1 score on the BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2% accuracy on PubMedQA, setting a new record. A case study on text generation further demonstrates BioGPT's ability to generate fluent descriptions for biomedical terms.
## 🔧 Technical Details
Technical details are not provided in the original document.
## 📄 License
This project is licensed under the MIT license.
## 📚 Citation
If you find BioGPT useful in your research, please cite the following paper:
```bibtex
@article{10.1093/bib/bbac409,
    author   = {Luo, Renqian and Sun, Liai and Xia, Yingce and Qin, Tao and Zhang, Sheng and Poon, Hoifung and Liu, Tie-Yan},
    title    = "{BioGPT: generative pre-trained transformer for biomedical text generation and mining}",
    journal  = {Briefings in Bioinformatics},
    volume   = {23},
    number   = {6},
    year     = {2022},
    month    = {09},
    abstract = "{Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Among the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first one has been extensively studied in the biomedical domain, such as BioBERT and PubMedBERT. While they have achieved great success on a variety of discriminative downstream biomedical tasks, the lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. Especially, we get 44.98\%, 38.42\% and 40.76\% F1 score on BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2\% accuracy on PubMedQA, creating a new record. Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fluent descriptions for biomedical terms.}",
    issn     = {1477-4054},
    doi      = {10.1093/bib/bbac409},
    url      = {https://doi.org/10.1093/bib/bbac409},
    note     = {bbac409},
    eprint   = {https://academic.oup.com/bib/article-pdf/23/6/bbac409/47144271/bbac409.pdf},
}
```
| Property | Details |
|----------|---------|
| Model Type | Transformer-based generative language model |
| Training Data | Large-scale biomedical literature |
| Datasets | pubmed_qa |
| Metrics | accuracy |
| Library Name | transformers |
| Pipeline Tag | text-generation |
| Tags | medical |