🚀 transformers
The transformers library provides convenient tooling for working with pre-trained models. This card shows how to use the ClinicalT5-base model, a practical option for clinical text processing.
🚀 Quick Start
The following code shows how to load the ClinicalT5-base model with the transformers library:
# Load the tokenizer and the ClinicalT5-base model
from transformers import AutoTokenizer, T5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("luqh/ClinicalT5-base")
model = T5ForConditionalGeneration.from_pretrained("luqh/ClinicalT5-base", from_flax=True)
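Note: the from_flax=True flag converts the released Flax checkpoint to PyTorch weights at load time, which requires the flax package to be installed alongside PyTorch.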
💻 Usage Examples
Basic Usage
from transformers import AutoTokenizer, T5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("luqh/ClinicalT5-base")
model = T5ForConditionalGeneration.from_pretrained("luqh/ClinicalT5-base", from_flax=True)
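The card stops at loading the checkpoint; the sketch below extends the basic usage with a minimal inference pass. The clinical sentence, the sentinel-token prompt, and the generation settings are illustrative assumptions rather than part of the original card, and the prompt assumes the tokenizer exposes the standard T5 sentinel tokens such as <extra_id_0>.

from transformers import AutoTokenizer, T5ForConditionalGeneration

# Load the tokenizer and convert the released Flax checkpoint to PyTorch.
tokenizer = AutoTokenizer.from_pretrained("luqh/ClinicalT5-base")
model = T5ForConditionalGeneration.from_pretrained("luqh/ClinicalT5-base", from_flax=True)

# Hypothetical clinical sentence with a T5 sentinel token marking a masked span;
# ClinicalT5 follows the T5 text-to-text setup, so the raw checkpoint can be
# probed with a masked-span prompt before any fine-tuning.
text = "The patient was admitted with <extra_id_0> and was started on antibiotics."
inputs = tokenizer(text, return_tensors="pt")

# Generate a short completion for the masked span.
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))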
📚 Documentation
If you find this resource useful, please consider citing our work: ClinicalT5: A Generative Language Model for Clinical Text
@inproceedings{lu-etal-2022-clinicalt5,
  title = "{C}linical{T}5: A Generative Language Model for Clinical Text",
  author = "Lu, Qiuhao and
    Dou, Dejing and
    Nguyen, Thien",
  booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
  month = dec,
  year = "2022",
  address = "Abu Dhabi, United Arab Emirates",
  publisher = "Association for Computational Linguistics",
  url = "https://aclanthology.org/2022.findings-emnlp.398",
  pages = "5436--5443",
  abstract = "In the past few years, large pre-trained language models (PLMs) have been widely adopted in different areas and have made fundamental improvements over a variety of downstream tasks in natural language processing (NLP). Meanwhile, domain-specific variants of PLMs are being proposed to address the needs of domains that demonstrate a specific pattern of writing and vocabulary, e.g., BioBERT for the biomedical domain and ClinicalBERT for the clinical domain. Recently, generative language models like BART and T5 are gaining popularity with their competitive performance on text generation as well as on tasks cast as generative problems. However, in the clinical domain, such domain-specific generative variants are still underexplored. To address this need, our work introduces a T5-based text-to-text transformer model pre-trained on clinical text, i.e., ClinicalT5. We evaluate the proposed model both intrinsically and extrinsically over a diverse set of tasks across multiple datasets, and show that ClinicalT5 dramatically outperforms T5 in the domain-specific tasks and compares favorably with its close baselines.",
}