🚀 BioGPT
BioGPT is a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. It performs strongly on a range of biomedical natural language processing tasks and can generate fluent descriptions for biomedical terms, broadening the application scope of pre-trained language models in the biomedical domain.
🚀 Quick Start
You can use this model directly with a text-generation pipeline. Because generation involves some randomness, we set a seed for reproducibility:
>>> from transformers import pipeline, set_seed
>>> from transformers import BioGptTokenizer, BioGptForCausalLM
>>> model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")
>>> tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")
>>> generator = pipeline('text-generation', model=model, tokenizer=tokenizer)
>>> set_seed(42)
>>> generator("COVID-19 is", max_length=20, num_return_sequences=5, do_sample=True)
[{'generated_text': 'COVID-19 is a disease that spreads worldwide and is currently found in a growing proportion of the population'},
{'generated_text': 'COVID-19 is one of the largest viral epidemics in the world.'},
{'generated_text': 'COVID-19 is a common condition affecting an estimated 1.1 million people in the United States alone.'},
{'generated_text': 'COVID-19 is a pandemic, the incidence has been increased in a manner similar to that in other'},
{'generated_text': 'COVID-19 is transmitted via droplets, air-borne, or airborne transmission.'}]
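Alternatively, pipeline can resolve both the tokenizer and the model from the checkpoint id alone; a minimal equivalent sketch (same microsoft/biogpt checkpoint as above):
from transformers import pipeline, set_seed
# pipeline() loads the tokenizer and model from the checkpoint id itself
generator = pipeline("text-generation", model="microsoft/biogpt")
set_seed(42)
print(generator("COVID-19 is", max_length=20, do_sample=True)[0]["generated_text"])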
✨ Key Features
Inspired by the great success of pre-trained language models in the general natural language domain, they have attracted increasing attention in the biomedical domain as well. Of the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first has been extensively studied in the biomedical domain, for example BioBERT and PubMedBERT. While these models have achieved great success on a variety of discriminative downstream biomedical tasks, their lack of generation ability constrains their application scope.
BioGPT is a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. Evaluated on six biomedical natural language processing tasks, BioGPT outperforms previous models on most of them. In particular, it achieves F1 scores of 44.98%, 38.42% and 40.76% on the BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2% accuracy on PubMedQA, setting a new record. A case study on text generation further demonstrates BioGPT's strength on biomedical literature, generating fluent descriptions for biomedical terms.
💻 Usage Examples
Basic Usage
Here is how to use this model to extract the features of a given text in PyTorch:
from transformers import BioGptTokenizer, BioGptForCausalLM
tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")
model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
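The forward call returns a standard transformers output object. As a minimal sketch (output_hidden_states is a generic transformers argument, not BioGPT-specific), you can inspect the next-token logits and the per-layer hidden states like this:
import torch
from transformers import BioGptTokenizer, BioGptForCausalLM

tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")
model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")
encoded_input = tokenizer("Replace me by any text you'd like.", return_tensors="pt")
with torch.no_grad():
    # output_hidden_states=True additionally returns every layer's hidden states
    output = model(**encoded_input, output_hidden_states=True)
print(output.logits.shape)             # (batch, sequence_length, vocab_size)
print(output.hidden_states[-1].shape)  # last layer: (batch, sequence_length, hidden_size)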
Advanced Usage
Here is an example of beam-search decoding:
import torch
from transformers import BioGptTokenizer, BioGptForCausalLM, set_seed
tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")
model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")
sentence = "COVID-19 is"
inputs = tokenizer(sentence, return_tensors="pt")
set_seed(42)
with torch.no_grad():
    beam_output = model.generate(
        **inputs,
        min_length=100,
        max_length=1024,
        num_beams=5,
        early_stopping=True,
    )
tokenizer.decode(beam_output[0], skip_special_tokens=True)
'COVID-19 is a global pandemic caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the causative agent of coronavirus disease 2019 (COVID-19), which has spread to more than 200 countries and territories, including the United States (US), Canada, Australia, New Zealand, the United Kingdom (UK), and the United States of America (USA), as of March 11, 2020, with more than 800,000 confirmed cases and more than 800,000 deaths.'
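Beam search favors high-probability, fairly conservative continuations; for more varied output you can sample instead. A minimal sketch using the generic transformers sampling arguments (do_sample, top_k and top_p are standard generate parameters, not BioGPT-specific; the values here are illustrative):
import torch
from transformers import BioGptTokenizer, BioGptForCausalLM, set_seed

tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")
model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")
inputs = tokenizer("COVID-19 is", return_tensors="pt")
set_seed(42)
with torch.no_grad():
    # Nucleus sampling: draw from the smallest token set whose cumulative
    # probability exceeds top_p, restricted to the top_k most likely tokens.
    sample_output = model.generate(
        **inputs,
        max_length=50,
        do_sample=True,
        top_k=50,
        top_p=0.95,
    )
print(tokenizer.decode(sample_output[0], skip_special_tokens=True))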
📚 Citation
If you find BioGPT useful in your research, please cite the following paper:
@article{10.1093/bib/bbac409,
    author = {Luo, Renqian and Sun, Liai and Xia, Yingce and Qin, Tao and Zhang, Sheng and Poon, Hoifung and Liu, Tie-Yan},
    title = "{BioGPT: generative pre-trained transformer for biomedical text generation and mining}",
    journal = {Briefings in Bioinformatics},
    volume = {23},
    number = {6},
    year = {2022},
    month = {09},
    abstract = "{Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Among the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first one has been extensively studied in the biomedical domain, such as BioBERT and PubMedBERT. While they have achieved great success on a variety of discriminative downstream biomedical tasks, the lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. Especially, we get 44.98\%, 38.42\% and 40.76\% F1 score on BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2\% accuracy on PubMedQA, creating a new record. Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fluent descriptions for biomedical terms.}",
    issn = {1477-4054},
    doi = {10.1093/bib/bbac409},
    url = {https://doi.org/10.1093/bib/bbac409},
    note = {bbac409},
    eprint = {https://academic.oup.com/bib/article-pdf/23/6/bbac409/47144271/bbac409.pdf},
}
📄 License
This project is licensed under the MIT License.