medgemma-27b-text-it GGUF Models
This project provides a series of GGUF models based on MedGemma 27B, optimized for medical text processing. These models offer various quantization options to balance memory usage and accuracy, suitable for different hardware environments.
Quick Start
To quickly start running the model locally on GPU, first install the Transformers library. Gemma 3 is supported starting from transformers 4.50.0.
$ pip install -U transformers
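You can verify that the installed version meets this requirement:

```python
import transformers

# MedGemma is based on Gemma 3, which requires transformers >= 4.50.0
print(transformers.__version__)
```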
Run the model with the pipeline API
```python
from transformers import pipeline
import torch

pipe = pipeline(
    "text-generation",
    model="google/medgemma-27b-text-it",
    torch_dtype=torch.bfloat16,
    device="cuda",
)

messages = [
    {
        "role": "system",
        "content": "You are a helpful medical assistant."
    },
    {
        "role": "user",
        "content": "How do you differentiate bacterial from viral pneumonia?"
    }
]

output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
```
Run the model directly
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "google/medgemma-27b-text-it"

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {
        "role": "system",
        "content": "You are a helpful medical assistant."
    },
    {
        "role": "user",
        "content": "How do you differentiate bacterial from viral pneumonia?"
    }
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

input_len = inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generation = model.generate(**inputs, max_new_tokens=200, do_sample=False)
    generation = generation[0][input_len:]

decoded = tokenizer.decode(generation, skip_special_tokens=True)
print(decoded)
```
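If you prefer to see tokens as they are generated, a minimal variation of the snippet above (reusing the same model, tokenizer, and inputs) can use Transformers' TextStreamer:

```python
from transformers import TextStreamer

# Print the answer token by token instead of waiting for the full generation
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
with torch.inference_mode():
    model.generate(**inputs, max_new_tokens=200, do_sample=False, streamer=streamer)
```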
Features
- Ultra-Low-Bit Quantization: Introduces precision-adaptive quantization for ultra-low-bit models (1-2 bit), with proven improvements on Llama-3-8B.
- Multiple Model Formats: Offers various model formats (BF16, F16, Quantized Models, etc.) to meet different hardware and memory requirements.
- Optimized for Medical Text: Trained on medical text, suitable for building healthcare-based AI applications.
Installation
Install the Transformers library:
$ pip install -U transformers
Usage Examples
Basic Usage
The above quick start code snippets show the basic usage of running the model locally on GPU.
Advanced Usage
If you want to use the model at scale, we recommend that you create a production version using Model Garden.
Documentation
Model Generation Details
This model was generated using llama.cpp at commit f5cd27b7.
Ultra-Low-Bit Quantization with IQ-DynamicGate (1-2 bit)
Our latest quantization method introduces precision-adaptive quantization for ultra-low-bit models (1-2 bit), with benchmark-proven improvements on Llama-3-8B. This approach uses layer-specific strategies to preserve accuracy while maintaining extreme memory efficiency.
Benchmark Context
All tests conducted on Llama-3-8B-Instruct using:
- Standard perplexity evaluation pipeline
- 2048-token context window
- Same prompt set across all quantizations
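The exact evaluation harness is not included here; as a rough sketch, a standard perplexity run over fixed 2048-token windows can be set up with Transformers like this (the model ID and evaluation file are placeholders, not the benchmark's actual setup):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative perplexity sketch over fixed 2048-token windows (placeholder model and data).
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

ids = tokenizer(open("eval.txt").read(), return_tensors="pt").input_ids[0]

ctx, nlls = 2048, []
for start in range(0, ids.size(0) - ctx, ctx):
    window = ids[start:start + ctx].unsqueeze(0).to(model.device)
    with torch.inference_mode():
        # Passing labels makes the model return the mean next-token negative log-likelihood
        nlls.append(model(window, labels=window).loss)

print("perplexity:", torch.exp(torch.stack(nlls).mean()).item())
```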
Method
- Dynamic Precision Allocation:
  - First/last 25% of layers → IQ4_XS (selected layers)
  - Middle 50% → IQ2_XXS/IQ3_S (increases efficiency)
- Critical Component Protection:
  - Embeddings/output layers use Q5_K
  - Reduces error propagation by 38% vs. standard 1-2 bit quantization
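For illustration only, the layer-wise precision assignment described above could be sketched roughly as follows (this is not the actual llama.cpp / IQ-DynamicGate code; the function and thresholds are assumptions):

```python
def pick_quant_type(layer_idx: int, n_layers: int) -> str:
    """Sketch of the precision-adaptive idea: keep more bits at the ends of the layer stack.
    Illustration only, not the actual implementation."""
    frac = layer_idx / max(n_layers - 1, 1)
    if frac < 0.25 or frac > 0.75:
        return "IQ4_XS"   # first/last 25% of layers keep higher precision
    return "IQ2_XXS"      # middle 50% uses the most aggressive quantization

# Embeddings and the output head are protected with Q5_K to limit error propagation
PROTECTED = {"token_embd": "Q5_K", "output": "Q5_K"}

print([pick_quant_type(i, n_layers=32) for i in range(32)])
```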
Quantization Performance Comparison (Llama-3-8B)
Quantization | Standard PPL | DynamicGate PPL | Δ PPL | Std Size | DG Size | Δ Size | Std Speed | DG Speed |
---|---|---|---|---|---|---|---|---|
IQ2_XXS | 11.30 | 9.84 | -12.9% | 2.5G | 2.6G | +0.1G | 234s | 246s |
IQ2_XS | 11.72 | 11.63 | -0.8% | 2.7G | 2.8G | +0.1G | 242s | 246s |
IQ2_S | 14.31 | 9.02 | -36.9% | 2.7G | 2.9G | +0.2G | 238s | 244s |
IQ1_M | 27.46 | 15.41 | -43.9% | 2.2G | 2.5G | +0.3G | 206s | 212s |
IQ1_S | 53.07 | 32.00 | -39.7% | 2.1G | 2.4G | +0.3G | 184s | 209s |
Key:
- PPL = Perplexity (lower is better)
- Δ PPL = Percentage change from standard to DynamicGate
- Speed = Inference time (CPU AVX2, 2048-token context)
- Size differences reflect mixed quantization overhead
Key Improvements:
- IQ1_M shows a massive 43.9% perplexity reduction (27.46 → 15.41)
- IQ2_S cuts perplexity by 36.9% while adding only 0.2GB
- IQ1_S maintains 39.7% better accuracy despite 1-bit quantization
Tradeoffs:
- All variants have modest size increases (0.1-0.3 GB)
- Inference speeds remain comparable (<5% difference)
When to Use These Models
- Fitting models into GPU VRAM
- Memory-constrained deployments
- CPU and edge devices where 1-2 bit errors can be tolerated
- Research into ultra-low-bit quantization
Choosing the Right Model Format
Selecting the correct model format depends on your hardware capabilities and memory constraints.
BF16 (Brain Float 16) – Use if BF16 acceleration is available
- A 16-bit floating-point format designed for faster computation while retaining good precision.
- Provides a dynamic range similar to FP32 but with lower memory usage.
- Recommended if your hardware supports BF16 acceleration (check your device's specs).
- Ideal for high-performance inference with reduced memory footprint compared to FP32.
Use BF16 if:
- Your hardware has native BF16 support (e.g., newer GPUs, TPUs).
- You want higher precision while saving memory.
- You plan to requantize the model into another format.
Avoid BF16 if:
- Your hardware does not support BF16 (it may fall back to FP32 and run slower).
- You need compatibility with older devices that lack BF16 optimization.
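Before choosing the BF16 file, you can check whether your GPU reports native BF16 support, for example with this small PyTorch check:

```python
import torch

# Report whether the current CUDA device has native BF16 support
if torch.cuda.is_available():
    print("BF16 supported:", torch.cuda.is_bf16_supported())
else:
    print("No CUDA device found; a BF16 file would run on CPU (or fall back to FP32).")
```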
F16 (Float 16) – More widely supported than BF16
- A 16-bit floating-point format with high precision but a smaller range of values than BF16.
- Works on most devices with FP16 acceleration support (including many GPUs and some CPUs).
- Slightly lower numerical precision than BF16 but generally sufficient for inference.
Use F16 if:
- Your hardware supports FP16 but not BF16.
- You need a balance between speed, memory usage, and accuracy.
- You are running on a GPU or another device optimized for FP16 computations.
Avoid F16 if:
- Your device lacks native FP16 support (it may run slower than expected).
- You have memory limitations.
Quantized Models (Q4_K, Q6_K, Q8, etc.) – For CPU & Low-VRAM Inference
Quantization reduces model size and memory usage while maintaining as much accuracy as possible.
- Lower-bit models (Q4_K) → Best for minimal memory usage, may have lower precision.
- Higher-bit models (Q6_K, Q8_0) → Better accuracy, require more memory.
Use Quantized Models if:
- You are running inference on a CPU and need an optimized model.
- Your device has low VRAM and cannot load full-precision models.
- You want to reduce memory footprint while keeping reasonable accuracy.
Avoid Quantized Models if:
- You need maximum accuracy (full-precision models are better for this).
- Your hardware has enough VRAM for higher-precision formats (BF16/F16).
Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)
These models are optimized for extreme memory efficiency, making them ideal for low-power devices or large-scale deployments where memory is a critical constraint.
- IQ3_XS: Ultra-low-bit quantization (3-bit) with extreme memory efficiency.
  - Use case: Best for ultra-low-memory devices where even Q4_K is too large.
  - Trade-off: Lower accuracy compared to higher-bit quantizations.
- IQ3_S: Small block size for maximum memory efficiency.
  - Use case: Best for low-memory devices where IQ3_XS is too aggressive.
- IQ3_M: Medium block size for better accuracy than IQ3_S.
  - Use case: Suitable for low-memory devices where IQ3_S is too limiting.
- Q4_K: 4-bit quantization with block-wise optimization for better accuracy.
  - Use case: Best for low-memory devices where Q6_K is too large.
- Q4_0: Pure 4-bit quantization, optimized for ARM devices.
  - Use case: Best for low-memory environments.
Summary Table: Model Format Selection
Model Format | Precision | Memory Usage | Device Requirements | Best Use Case |
---|---|---|---|---|
BF16 | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory |
F16 | High | High | FP16-supported devices | GPU inference when BF16 isn't available |
Q4_K | Medium Low | Low | CPU or Low-VRAM devices | Best for memory-constrained environments |
Q6_K | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized |
Q8_0 | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models |
IQ3_XS | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency, but low accuracy |
Q4_0 | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices |
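As a rough aid when matching a format to your hardware, you can compare your free VRAM against the sizes of the files listed in the next section (a small PyTorch sketch):

```python
import torch

# Report free vs. total VRAM so you can match it against the GGUF file sizes
if torch.cuda.is_available():
    free_bytes, total_bytes = torch.cuda.mem_get_info()
    print(f"Free VRAM: {free_bytes / 1e9:.1f} GB of {total_bytes / 1e9:.1f} GB")
else:
    print("No GPU detected; consider the CPU-oriented quantizations (Q4_K, Q6_K, Q8_0).")
```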
Included Files & Details
- medgemma-27b-text-it-bf16.gguf: Model weights preserved in BF16. Use this if you want to requantize the model into a different format. Best if your device supports BF16 acceleration.
- medgemma-27b-text-it-f16.gguf: Model weights stored in F16. Use if your device supports FP16, especially if BF16 is not available.
- medgemma-27b-text-it-bf16-q8_0.gguf: Output & embeddings remain in BF16. All other layers quantized to Q8_0. Use if your device supports BF16 and you want a quantized version.
- medgemma-27b-text-it-f16-q8_0.gguf: Output & embeddings remain in F16. All other layers quantized to Q8_0.
- medgemma-27b-text-it-q4_k.gguf: Output & embeddings quantized to Q8_0. All other layers quantized to Q4_K. Good for CPU inference with limited memory.
- medgemma-27b-text-it-q4_k_s.gguf: Smallest Q4_K variant, using less memory at the cost of accuracy. Best for very low-memory setups.
- medgemma-27b-text-it-q6_k.gguf: Output & embeddings quantized to Q8_0. All other layers quantized to Q6_K.
- medgemma-27b-text-it-q8_0.gguf: Fully Q8_0 quantized model for better accuracy. Requires more memory but offers higher precision.
- medgemma-27b-text-it-iq3_xs.gguf: IQ3_XS quantization, optimized for extreme memory efficiency. Best for ultra-low-memory devices.
- medgemma-27b-text-it-iq3_m.gguf: IQ3_M quantization, offering a medium block size for better accuracy. Suitable for low-memory devices.
- medgemma-27b-text-it-q4_0.gguf: Pure Q4_0 quantization, optimized for ARM devices. Best for low-memory environments. Prefer IQ4_NL for better accuracy.
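Any of the files above can be run with llama.cpp or its Python bindings. As a rough sketch using the llama-cpp-python package (the file path and settings below are examples; adjust them to the quantization you downloaded):

```python
# pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="medgemma-27b-text-it-q4_k.gguf",  # example path: use whichever quantization you downloaded
    n_ctx=4096,        # context window; raise it if you have the memory
    n_threads=8,       # CPU threads for inference
    n_gpu_layers=-1,   # offload all layers to GPU if available; set to 0 for CPU-only
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful medical assistant."},
        {"role": "user", "content": "How do you differentiate bacterial from viral pneumonia?"},
    ],
    max_tokens=200,
)
print(out["choices"][0]["message"]["content"])
```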
If you find these models useful
- Please click "Like" if you find this useful!
- Help me test my AI-Powered Network Monitor Assistant with quantum-ready security checks: Free Network Monitor
How to test: Choose an AI assistant type:
- TurboLLM (GPT-4o-mini)
- HugLLM (Hugging Face open-source)
- TestLLM (Experimental, CPU-only)
What I'm Testing
I'm pushing the limits of small open-source models for AI network monitoring, specifically:
- Function calling against live network services
- How small can a model go while still handling:
- Automated Nmap scans
- Quantum-readiness checks
- Network Monitoring tasks
TestLLM – Current experimental model (llama.cpp on 2 CPU threads)
- Zero-configuration setup
- ≤ 30s load time (slow inference, but no API costs)
- Help wanted! If you're into edge-device AI, let's collaborate!
Other Assistants
- TurboLLM – Uses gpt-4o-mini for:
- Create custom cmd processors to run .net code on Free Network Monitor Agents
- Real-time network diagnostics and monitoring
- Security Audits
- Penetration testing (Nmap/Metasploit)
- Get more tokens by logging in or downloading our Free Network Monitor Agent with integrated AI Assistant
- HugLLM – Latest open-source models:
- Runs on Hugging Face Inference API
Example commands you could test:
- "Give me info on my website's SSL certificate"
- "Check if my server is using quantum-safe encryption for communication"
- "Run a comprehensive security audit on my server"
- "Create a cmd processor to ... (whatever you want)" Note: you need to install a Free Network Monitor Agent to run the .net code. This is a very flexible and powerful feature. Use with caution!
Technical Details
Model architecture overview
The MedGemma model is built based on Gemma 3 and uses the same decoder-only transformer architecture as Gemma 3. To read more about the architecture, consult the Gemma 3 model card.
Technical specifications
Property | Details |
---|---|
Model Type | Decoder-only Transformer architecture, see the Gemma 3 technical report |
Modalities | 4B: Text, vision; 27B: Text only |
Attention mechanism | Utilizes grouped-query attention (GQA) |
Context length | Supports long context, at least 128K tokens |
Key publication | Coming soon |
Model created | May 20, 2025 |
Model version | 1.0.0 |
Inputs and outputs
Input:
- Text string, such as a question or prompt
- Total input length of 128K tokens
Output:
- Generated text in response to the input, such as an answer to a question, analysis of image content, or a summary of a document
- Total output length of 8192 tokens
Performance and validation
MedGemma was evaluated across a range of different multimodal classification, report generation, visual question answering, and text-based tasks.
Key performance metrics
Text evaluations
MedGemma 4B and text-only MedGemma 27B were evaluated across a range of text-only benchmarks for medical knowledge and reasoning.
The MedGemma models outperform their respective base Gemma models across all tested text-only health benchmarks.
Metric | MedGemma 27B | Gemma 3 27B | MedGemma 4B | Gemma 3 4B |
---|---|---|---|---|
MedQA (4-op) | 89.8 (best-of-5) 87.7 (0-shot) | 74.9 | 64.4 | 50.7 |
MedMCQA | 74.2 | 62.6 | 55.7 | 45.4 |
PubMedQA | 76.8 | 73.4 | 73.4 | 68.4 |
MMLU Med (text only) | 87.0 | 83.3 | 70.0 | 67.2 |
MedXpertQA (text only) | 26.7 | 15.7 | 14.2 | 11.6 |
AfriMed-QA | 84.0 | 72.0 | 52.0 | 48.0 |
For all MedGemma 27B results, test-time scaling is used to improve performance.
Ethics and safety evaluation
Evaluation approach
Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams.
License
The use of MedGemma is governed by the Health AI Developer Foundations terms of use.

