Medical-Llama3-8B-4bit: Fine-Tuned Llama3 for Medical Q&A
This repository offers a fine-tuned version of the powerful Llama3 8B model, specifically crafted to answer medical questions informatively. It harnesses the rich knowledge from the AI Medical Chatbot dataset ([ruslanmv/ai-medical-chatbot](https://huggingface.co/datasets/ruslanmv/ai-medical-chatbot)).
Installation
This model can be accessed via the Hugging Face Transformers library. Install the required packages with the following pip commands:
pip install git+https://github.com/huggingface/accelerate.git
pip install git+https://github.com/huggingface/transformers.git
pip install bitsandbytes
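Note: bitsandbytes 4-bit loading generally requires a CUDA-capable GPU. As a quick environment sanity check (a minimal sketch, not part of the official setup), you can verify that PyTorch sees your GPU before loading the model:

import torch
print(torch.__version__)
print(torch.cuda.is_available())  # should print True for 4-bit GPU loading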
Usage Examples
Basic Usage
Here's a Python code snippet demonstrating how to interact with the llama3-8B-medical model and generate answers to medical questions:
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch
model_id = "ruslanmv/llama3-8B-medical"
quantization_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quantization_config,
    device_map="auto",
)
def create_prompt(user_query):
    # Wrap the user question in a system prompt plus instruction tags
    B_INST, E_INST = "<s>[INST]", "[/INST]"
    B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"
    DEFAULT_SYSTEM_PROMPT = """\
You are an AI Medical Chatbot Assistant, provide comprehensive and informative responses to user inquiries.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information."""
    SYSTEM_PROMPT = B_SYS + DEFAULT_SYSTEM_PROMPT + E_SYS
    instruction = f"User asks: {user_query}\n"
    prompt = B_INST + SYSTEM_PROMPT + instruction + E_INST
    return prompt.strip()
def generate_text(model, tokenizer, user_query,
                  max_length=200,
                  temperature=0.8,
                  num_return_sequences=1):
    # Build the prompt, tokenize it, and move it to the model's device
    prompt = create_prompt(user_query)
    input_ids = tokenizer.encode(prompt, return_tensors="pt").to(device)
    output = model.generate(
        input_ids=input_ids,
        max_length=max_length,
        temperature=temperature,
        num_return_sequences=num_return_sequences,
        pad_token_id=tokenizer.eos_token_id,
        do_sample=True
    )
    # Decode only the newly generated tokens, skipping the prompt
    generated_text = tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
    return generated_text.strip()
user_query = "I'm a 35-year-old male experiencing symptoms like fatigue, increased sensitivity to cold, and dry, itchy skin. Could these be indicative of hypothyroidism?"
generated_text = generate_text(model, tokenizer, user_query)
print(generated_text)
A typical generated answer looks like this:
Yes, it is possible. Hypothyroidism can present symptoms like increased sensitivity to cold, dry skin, and fatigue. These symptoms are characteristic of hypothyroidism. I recommend consulting with a healthcare provider. 2. Hypothyroidism can present symptoms like fever, increased sensitivity to cold, dry skin, and fatigue. These symptoms are characteristic of hypothyroidism.
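If you would rather see the answer appear token by token instead of waiting for the full generation, here is a minimal sketch using transformers' TextStreamer (this is not part of the original example; it reuses the helpers and sampling settings defined above):

from transformers import TextStreamer

# Stream decoded tokens to stdout as they are generated
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

prompt = create_prompt(user_query)
input_ids = tokenizer.encode(prompt, return_tensors="pt").to(device)

model.generate(
    input_ids=input_ids,
    max_length=200,
    temperature=0.8,
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id,
    streamer=streamer,
)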
Features
- Medical Focus: Optimized to handle health-related inquiries.
- Knowledge Base: Trained on a comprehensive medical chatbot dataset.
- Text Generation: Capable of generating informative and potentially helpful responses.
Documentation
Model & Development
- Developed by: ruslanmv
- License: Apache-2.0
- Finetuned from model: meta-llama/Meta-Llama-3-8B
License
This model is distributed under the Apache License 2.0 (see LICENSE file for details).
Important Note
This model is intended for informational purposes only and should not be used as a substitute for professional medical advice. Always consult with a qualified healthcare provider for any medical concerns.
Usage Tip
While we strive to provide informative responses, the accuracy of the model's outputs cannot be guaranteed. It is crucial to consult a doctor or other healthcare professional for definitive medical advice.
Contributing
We welcome contributions to this repository! If you have improvements or suggestions, feel free to create a pull request.