BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains
BioMistral is a suite of open-source large language models tailored for the medical domain. Built on the Mistral foundation model and further pre-trained on PubMed Central, BioMistral outperforms existing open-source medical models and is competitive with proprietary ones. The project also conducts the first large-scale multilingual evaluation of LLMs in the medical domain.
Quick Start
You can use BioMistral with Hugging Face's Transformers library as follows.
Basic Usage
```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BioMistral/BioMistral-7B")
model = AutoModel.from_pretrained("BioMistral/BioMistral-7B")
```
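For text generation, the checkpoint can also be loaded as a causal language model. The snippet below is a minimal sketch using `AutoModelForCausalLM` and the standard `generate` API; the prompt text and generation settings are illustrative assumptions, not part of the official usage instructions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load BioMistral-7B as a causal LM for text generation (illustrative sketch).
tokenizer = AutoTokenizer.from_pretrained("BioMistral/BioMistral-7B")
model = AutoModelForCausalLM.from_pretrained(
    "BioMistral/BioMistral-7B",
    torch_dtype=torch.float16,  # half precision to reduce VRAM usage
    device_map="auto",          # place weights on available GPU(s)/CPU
)

# Hypothetical prompt, for demonstration purposes only.
prompt = "Question: What are the main symptoms of iron-deficiency anemia?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Greedy decoding; adjust max_new_tokens and sampling parameters as needed.
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```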
⨠Features
- High performance on medical tasks: BioMistral outperforms existing open-source medical models and is competitive with proprietary counterparts on a benchmark of 10 established English medical question-answering tasks.
- Multilingual evaluation: the project conducts the first large-scale multilingual evaluation of LLMs in the medical domain by automatically translating the benchmark into 7 other languages and evaluating on them.
- Quantized models: several quantization methods are applied to the BioMistral models, providing options with different VRAM requirements and performance trade-offs.
Model Information
BioMistral Models
| Property | Details |
|---|---|
| Model Type | Further pre-trained models based on Mistral, suitable for medical domains |
| Training Data | Textual data from PubMed Central Open Access (CC0, CC BY, CC BY-SA, and CC BY-ND) |
| Training Platform | CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/jean-zay/) French HPC |
| Model Name | Base Model | Model Type | Sequence Length | Download |
|---|---|---|---|---|
| BioMistral-7B | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Further Pre-trained | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B) |
| BioMistral-7B-DARE | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge DARE | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-DARE) |
| BioMistral-7B-TIES | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge TIES | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-TIES) |
| BioMistral-7B-SLERP | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge SLERP | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-SLERP) |
Quantized Models
| Base Model | Method | q_group_size | w_bit | version | VRAM (GB) | Time | Download |
|---|---|---|---|---|---|---|---|
| BioMistral-7B | FP16/BF16 | | | | 15.02 | x1.00 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B) |
| BioMistral-7B | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMM) |
| BioMistral-7B | AWQ | 128 | 4 | GEMV | 4.68 | x10.30 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMV) |
| BioMistral-7B | BnB.4 | | 4 | | 5.03 | x3.25 | HuggingFace |
| BioMistral-7B | BnB.8 | | 8 | | 8.04 | x4.34 | HuggingFace |
| BioMistral-7B-DARE | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-DARE-AWQ-QGS128-W4-GEMM) |
| BioMistral-7B-TIES | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-TIES-AWQ-QGS128-W4-GEMM) |
| BioMistral-7B-SLERP | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-SLERP-AWQ-QGS128-W4-GEMM) |
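As an illustration of how these variants can be loaded, the sketch below shows (a) one of the pre-quantized AWQ checkpoints loaded through Transformers and (b) on-the-fly 4-bit quantization of the FP16 checkpoint with bitsandbytes via `BitsAndBytesConfig`, roughly corresponding to the BnB.4 row. It assumes a recent Transformers release with the `autoawq`, `bitsandbytes`, and `accelerate` backends installed; it is not an official loading recipe.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

tokenizer = AutoTokenizer.from_pretrained("BioMistral/BioMistral-7B")

# (a) Pre-quantized AWQ checkpoint (GEMM kernel, ~4.7 GB of VRAM).
awq_model = AutoModelForCausalLM.from_pretrained(
    "BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMM",
    device_map="auto",
)

# (b) On-the-fly 4-bit quantization of the FP16 checkpoint with bitsandbytes,
# roughly matching the BnB.4 row above (~5 GB of VRAM).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
bnb_model = AutoModelForCausalLM.from_pretrained(
    "BioMistral/BioMistral-7B",
    quantization_config=bnb_config,
    device_map="auto",
)
```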
Documentation
Supervised Fine-tuning Benchmark
| Model | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA | MedQA 5 opts | PubMedQA | MedMCQA | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|
| BioMistral 7B | 59.9 | 64.0 | 56.5 | 60.4 | 59.0 | 54.7 | 50.6 | 42.8 | 77.5 | 48.1 | 57.3 |
| Mistral 7B Instruct | 62.9 | 57.0 | 55.6 | 59.4 | 62.5 | 57.2 | 42.0 | 40.9 | 75.7 | 46.1 | 55.9 |
| | | | | | | | | | | | |
| BioMistral 7B Ensemble | 62.8 | 62.7 | 57.5 | 63.5 | 64.3 | 55.7 | 50.6 | 43.6 | 77.5 | 48.8 | 58.7 |
| BioMistral 7B DARE | 62.3 | 67.0 | 55.8 | 61.4 | 66.9 | 58.0 | 51.1 | 45.2 | 77.7 | 48.7 | 59.4 |
| BioMistral 7B TIES | 60.1 | 65.0 | 58.5 | 60.5 | 60.4 | 56.5 | 49.5 | 43.2 | 77.5 | 48.1 | 57.9 |
| BioMistral 7B SLERP | 62.5 | 64.7 | 55.8 | 62.7 | 64.8 | 56.3 | 50.8 | 44.3 | 77.8 | 48.6 | 58.8 |
| | | | | | | | | | | | |
| MedAlpaca 7B | 53.1 | 58.0 | 54.1 | 58.8 | 58.1 | 48.6 | 40.1 | 33.7 | 73.6 | 37.0 | 51.5 |
| PMC-LLaMA 7B | 24.5 | 27.7 | 35.3 | 17.4 | 30.3 | 23.3 | 25.5 | 20.2 | 72.9 | 26.6 | 30.4 |
| MediTron-7B | 41.6 | 50.3 | 46.4 | 27.9 | 44.4 | 30.8 | 41.6 | 28.1 | 74.9 | 41.3 | 42.7 |
| BioMedGPT-LM-7B | 51.4 | 52.0 | 49.4 | 53.3 | 50.7 | 49.1 | 42.5 | 33.9 | 76.8 | 37.6 | 49.7 |
| | | | | | | | | | | | |
| GPT-3.5 Turbo 1106* | 74.71 | 74.00 | 65.92 | 72.79 | 72.91 | 64.73 | 57.71 | 50.82 | 72.66 | 53.79 | 66.0 |
Supervised Fine-Tuning (SFT) performance of BioMistral 7B models compared to baselines, measured by accuracy (↑) and averaged across 3 random seeds of 3-shot evaluation. DARE, TIES, and SLERP are model merging strategies that combine BioMistral 7B and Mistral 7B Instruct. *GPT-3.5 Turbo performance is reported from the 3-shot results without SFT.
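To make the 3-shot multiple-choice setting concrete, the sketch below shows one way a few-shot prompt can be assembled and accuracy computed by exact match on the predicted option letter. The template, helper names, and any example data are illustrative assumptions and do not reproduce the paper's actual evaluation harness.

```python
# Illustrative 3-shot multiple-choice prompting and accuracy scoring.
# The template and any example data are placeholders, not the benchmark itself.

def format_example(question, options, answer=None):
    """Render one question with lettered options; include the answer for solved shots."""
    letters = "ABCD"
    text = f"Question: {question}\n"
    for letter, option in zip(letters, options):
        text += f"{letter}. {option}\n"
    text += "Answer:"
    if answer is not None:
        text += f" {answer}\n\n"
    return text

def build_3shot_prompt(shots, test_question, test_options):
    """Concatenate 3 solved examples followed by the unsolved test question."""
    prompt = "".join(format_example(q, opts, ans) for q, opts, ans in shots[:3])
    return prompt + format_example(test_question, test_options)

def accuracy(predictions, references):
    """Exact match on the first predicted letter (A/B/C/D), as a fraction of correct answers."""
    correct = sum(p.strip().upper()[:1] == r for p, r in zip(predictions, references))
    return correct / len(references)
```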
License
The project is released under the Apache-2.0 license.
Important Note
Although BioMistral is intended to encapsulate medical knowledge sourced from high-quality evidence, it has not been tailored to convey this knowledge effectively, safely, or within professional parameters for action. We advise against using BioMistral in medical contexts unless it is thoroughly aligned with specific use cases and subjected to further testing, notably including randomized controlled trials in real-world medical environments. BioMistral 7B may carry inherent risks and biases that have not yet been thoroughly assessed, and its performance has not been evaluated in real-world clinical settings. We therefore recommend using BioMistral 7B strictly as a research tool and advise against deploying it in production environments for natural language generation or any professional health and medical purposes.
Citation
arXiv: https://arxiv.org/abs/2402.10373
```bibtex
@misc{labrak2024biomistral,
      title={BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains},
      author={Yanis Labrak and Adrien Bazoge and Emmanuel Morin and Pierre-Antoine Gourraud and Mickael Rouvier and Richard Dufour},
      year={2024},
      eprint={2402.10373},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```