🚀 DISC-MedLLM
This repository hosts DISC-MedLLM, a medical domain-specific LLM developed by the Fudan-DISC lab for conversational healthcare scenarios. It uses Baichuan-13B-Base as its base model and offers high-quality health support services such as medical consultations and treatment inquiries.
Note: As the project is under continuous development, the model weights in this repository might differ from those in our currently deployed demo. For more details, check DISC-MedLLM.
Demo | Tech Report
🚀 Quick Start
You can try our online demo through this link.
✨ Features
- Knowledge-intensive and reliable: Effectively bridges the gap between general language models and real-world medical consultations.
- Multi-turn inquiry ability: Capable of handling multiple rounds of medical inquiries.
- Alignment with human preferences: Aligned with human preferences based on real-world doctor-patient dialogues.
📦 Installation
The current version of DISC-MedLLM is derived from Baichuan-13B-Base. You can download our model weights directly from the Hugging Face repository, or obtain them automatically through the demo code.
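If you prefer to fetch the weights ahead of time, here is a minimal sketch using the huggingface_hub library (assuming it is installed, e.g. via pip install huggingface_hub):
>>> from huggingface_hub import snapshot_download
>>> local_dir = snapshot_download(repo_id="Flmc/DISC-MedLLM")  # downloads all model files and returns the local cache path
>>> print(local_dir)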
Using Hugging Face Transformers
>>> import torch
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> from transformers.generation.utils import GenerationConfig
>>> tokenizer = AutoTokenizer.from_pretrained("Flmc/DISC-MedLLM", use_fast=False, trust_remote_code=True)
>>> model = AutoModelForCausalLM.from_pretrained("Flmc/DISC-MedLLM", device_map="auto", torch_dtype=torch.float16, trust_remote_code=True)
>>> model.generation_config = GenerationConfig.from_pretrained("Flmc/DISC-MedLLM")
>>> messages = []
>>> messages.append({"role": "user", "content": "我感觉自己颈椎非常不舒服,每天睡醒都会头痛"})  # "My neck feels very uncomfortable, and I wake up with a headache every day"
>>> response = model.chat(tokenizer, messages)
>>> print(response)
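Since the model supports multi-turn inquiry, you can continue the conversation by appending the assistant's reply and your next question to messages before calling chat again. A minimal sketch, assuming the same Baichuan-style chat interface as above (the follow-up question is illustrative):
>>> messages.append({"role": "assistant", "content": response})
>>> messages.append({"role": "user", "content": "平时工作需要长时间低头看电脑,有什么建议吗?"})  # illustrative follow-up: "I look down at a computer for long hours at work; any suggestions?"
>>> response = model.chat(tokenizer, messages)
>>> print(response)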
Additionally, since the current version uses Baichuan as the base model, you can refer to its repo for deploying with int8 or int4 quantized inference. Note that quantized deployment will degrade performance.
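As a rough sketch of int8 loading, assuming the Baichuan-style quantize() helper is exposed through trust_remote_code (refer to the Baichuan-13B repository for the authoritative procedure):
>>> import torch
>>> from transformers import AutoModelForCausalLM
>>> model = AutoModelForCausalLM.from_pretrained("Flmc/DISC-MedLLM", torch_dtype=torch.float16, trust_remote_code=True)
>>> model = model.quantize(8).cuda()  # assumed Baichuan-style helper; use quantize(4) for int4, with further quality loss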
📚 Documentation
Overview
DISC-MedLLM is a large-scale, domain-specific model designed for conversational healthcare scenarios. It addresses a variety of needs, including medical consultations and treatment inquiries, offering high-quality health support services. Experimental results show that it effectively bridges the gap between general language models and real-world medical consultations.
Dataset
To train DISC-MedLLM, we constructed a high-quality dataset called DISC-Med-SFT, consisting of over 470k distinct examples derived from existing medical datasets. We adopt a goal-oriented strategy, selectively reconstructing the dataset from a few deliberately chosen sources. These sources help LLMs acquire medical domain knowledge, align behavioral patterns with human preferences, and capture real-world online medical dialogue distributions.
| Dataset | Original Source | Size |
| --- | --- | --- |
| Re-constructed AI Doctor-Patient Dialogue | MedDialog | 400k |
| Re-constructed AI Doctor-Patient Dialogue | cMedQA2 | 20k |
| Knowledge Graph QA pairs | CMeKG | 50k |
| Behavior Preference Dataset | Manual selection | 2k |
| Others | MedMCQA | 8k |
| Others | MOSS-SFT | 33k |
| Others | Alpaca-GPT4-zh | 1k |
Training
You can fine-tune our model with data that follows our data schema. Our training code is derived from Firefly, with a different data schema and dialogue format. We only provide the code for full-parameter fine-tuning:
deepspeed --num_gpus={num_gpus} ./train/train.py --train_args_file ./train/train_args/sft.json
Please check the setup of `sft.json` before you attempt to start training.
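For orientation, a hypothetical sft.json is sketched below. The actual field names are defined by the training code, so treat every key and value here as a placeholder to verify against your checkout, not as a working configuration:
{
  "_note": "all keys and values below are illustrative placeholders; check the real sft.json in ./train/train_args/",
  "output_dir": "output/disc-medllm-sft",
  "model_name_or_path": "baichuan-inc/Baichuan-13B-Base",
  "train_file": "./data/train.jsonl",
  "num_train_epochs": 1,
  "per_device_train_batch_size": 8,
  "gradient_accumulation_steps": 2,
  "learning_rate": 1e-5,
  "max_seq_length": 1024,
  "logging_steps": 50,
  "save_steps": 500,
  "fp16": true
}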
If you want to fine-tune our model with other training code, please use the following dialogue format.
<\b><$user_token>content<$assistant_token>content<\s><$user_token>content ...
The `user_token` and `assistant_token` IDs we used are `195` and `196`, respectively, the same as in Baichuan-13B-Chat.
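To make the format concrete, here is a minimal sketch of assembling one training sample's token sequence. The build_input_ids helper is hypothetical, not part of this repo, and we assume the tokenizer's BOS and EOS IDs stand in for <\b> and <\s>:
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Flmc/DISC-MedLLM", use_fast=False, trust_remote_code=True)

USER_TOKEN_ID = 195       # <$user_token>, same ID as Baichuan-13B-Chat
ASSISTANT_TOKEN_ID = 196  # <$assistant_token>, same ID as Baichuan-13B-Chat

def build_input_ids(turns):
    # Hypothetical helper: turns is an ordered list of (role, content) pairs.
    input_ids = [tokenizer.bos_token_id]                  # <\b> opens the dialogue (assumed to map to BOS)
    for role, content in turns:
        input_ids.append(USER_TOKEN_ID if role == "user" else ASSISTANT_TOKEN_ID)
        input_ids.extend(tokenizer.encode(content, add_special_tokens=False))
        if role == "assistant":
            input_ids.append(tokenizer.eos_token_id)      # <\s> closes each assistant reply (assumed to map to EOS)
    return input_ids

ids = build_input_ids([("user", "question text"), ("assistant", "reply text")])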
Declaration
Due to the inherent limitations of language models, we cannot assure the accuracy or reliability of information generated by this model. This model is designed exclusively for research and testing by individuals and academic groups. We urge users to critically assess any information or medical advice obtained through the model's output. Blindly trusting or following such information is strongly discouraged. We disclaim responsibility for any issues, risks, or adverse consequences resulting from the model's use.
📄 License
The use of the source code in this repository complies with the Apache 2.0 License.
Citation
@misc{bao2023discmedllm,
title={DISC-MedLLM: Bridging General Large Language Models and Real-World Medical Consultation},
author={Zhijie Bao and Wei Chen and Shengze Xiao and Kuang Ren and Jiaao Wu and Cheng Zhong and Jiajie Peng and Xuanjing Huang and Zhongyu Wei},
year={2023},
eprint={2308.14346},
archivePrefix={arXiv},
primaryClass={cs.CL}
}