# 🚀 Medra: Your Compact Medical Reasoning Partner
Medra is a purpose-built, lightweight medical language model designed to assist with clinical reasoning, education, and dialogue modeling. Built on Gemma 3, it is the first step in a long-term project to create AI support systems for medicine that are deployable, interpretable, and ethically aligned.
## Model Information

| Property | Details |
|---|---|
| Model Size | 4B parameters |
| Version | Medra v1 (Gemma Edition) |
| Format | GGUF (Q4, Q8, BF16) |
| License | Apache 2.0 |
| Author | Dr. Alexandru Lupoi |
| Model Type | Fine-tuned Gemma 3 |
| Training Data | qiaojin/PubMedQA, Mreeb/Dermatology-Question-Answer-Dataset-For-Fine-Tuning, lavita/MedQuAD |

## 🚀 Quick Start
Medra is a cognitive tool: a reasoning companion for students, clinicians, and researchers. It runs on consumer hardware, supports nuanced medical prompts, and is not intended to replace human judgment.
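As a minimal sketch of local usage: Gemma 3 models expect prompts wrapped in Gemma's chat-turn markers, which most local runtimes (Ollama, LM Studio, llama.cpp) apply automatically via the model's chat template. If you are building the prompt string yourself, it looks like this (the clinical question below is just an illustration):

```python
# Sketch: wrap a single user turn in Gemma 3's chat template markers
# before sending it to a local GGUF runtime (e.g. llama.cpp / llama-cpp-python).
# Which model file you load is deployment-specific and not shown here.

def format_gemma_prompt(user_message: str) -> str:
    """Wrap one user turn in Gemma's chat-turn markers."""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = format_gemma_prompt(
    "A 54-year-old presents with acute chest pain. "
    "Outline an initial differential diagnosis."
)
print(prompt)
```

Most inference engines hide this detail; manual formatting only matters when calling a raw completion endpoint.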
## ✨ Features

### Purpose & Philosophy
Medra was developed to address the lack of AI models optimized for structured, medically relevant reasoning that can run locally. It aims to provide interpretable outputs, support differential diagnosis, assist medical students, and refine reasoning in clinical contexts. The project's premise is that AI in healthcare must be transparent, educational, and augmentative.
### Key Capabilities
- Lightweight Clinical Reasoning Core: Fine-tuned to support structured medical queries, diagnostic steps, SOAP formatting, and clinical questioning strategies.
- Local and Mobile Friendly: Runs on local devices via Ollama, LM Studio, KoboldCpp, and other local inference engines, with no API required.
- Data & Alignment: Trained on medical content such as PubMed-derived literature, reasoning datasets, clinical notes, and prompt structures based on real-world physician interactions.
- High Interpretability: Designed for transparency and reflection, not black-box decision-making.
- Designed for Ethical Integration: Built to be aligned, cautious, and useful in human-in-the-loop medical settings.
### Intended Use
- Medical education and exam-style reasoning
- Case-based learning simulation
- AI health assistant prototyping
- Dialogue modeling in therapeutic or diagnostic contexts
- A tool for thinking alongside, not instead of, human judgment
### Limitations
- Medra is not a licensed medical professional and is not intended for real-world diagnosis, treatment planning, or patient interaction without human oversight.
- The model may hallucinate, oversimplify, or present outdated medical knowledge, particularly in edge cases.
- It does not have long-term memory, real-world clinical data access, or the authority to guide care.
- It is a prototype, not a replacement for expertise.
## 🔧 Technical Details
- Base model: Gemma 3
- Fine-tuning stages: Instruction tuning (SFT); RLHF planned for an upcoming release
- Data domains: Medical Q&A, differential diagnosis formats, clinical conversation datasets, PubMed-derived material
- Supported inference engines: Ollama, LM Studio, KoboldCpp, GGML-compatible platforms
- Quantization formats: Q4, Q8, BF16
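As a rough guide to choosing among the quantization formats, here is a back-of-the-envelope estimate of weight storage for a ~4B-parameter model. This is an approximation only: real GGUF files add metadata, Q4 variants store extra scale factors, and some tensors are kept at higher precision.

```python
# Back-of-the-envelope memory estimate for a ~4B-parameter model
# at the listed quantization levels. Actual GGUF file sizes will
# differ somewhat (quantization scales, headers, mixed-precision tensors).

PARAMS = 4e9  # approximate parameter count

BITS_PER_WEIGHT = {
    "Q4": 4,     # ~4 bits per weight
    "Q8": 8,     # ~8 bits per weight
    "BF16": 16,  # bfloat16, 2 bytes per weight
}

def approx_size_gb(bits: int, params: float = PARAMS) -> float:
    """Approximate weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return params * bits / 8 / 1e9

for fmt, bits in BITS_PER_WEIGHT.items():
    print(f"{fmt}: ~{approx_size_gb(bits):.1f} GB")
# → roughly 2 GB (Q4), 4 GB (Q8), 8 GB (BF16) for the weights alone
```

In practice, leave additional headroom for the KV cache and runtime overhead on top of these figures.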
## 📄 License
This project is licensed under the Apache 2.0 license.
## The Medra Family
Medra is part of a family of medical reasoning models:
- Medra: Gemma-based compact model for lightweight local inference
- MedraQ: Qwen 3-based, multilingual and adaptive version
- MedraOmni: Future flagship model built on Qwen 2.5 Omni with full multimodal support

Each model in the series is purpose-built, ethically scoped, and focused on responsible augmentation of healthcare knowledge.
## Final Note
Medra was built to make intelligent care more accessible, transparent, and aligned with human needs.
## Uploaded Fine-tuned Model
- Developed by: drwlf
- License: apache-2.0
- Fine-tuned from model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit

This Gemma 3 model was trained 2x faster with Unsloth and Hugging Face's TRL library.
