🚀 Llama JoyCaption Beta One HF LLava GGUF Models
Llama JoyCaption Beta One is an image captioning Visual Language Model (VLM). It's free, open, and uncensored, suitable for training Diffusion models.
🚀 Quick Start
For more details, please refer to the GitHub repository.
Here is an example of how to use the model:
```python
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

IMAGE_PATH = "image.jpg"
PROMPT = "Write a long descriptive caption for this image in a formal tone."
MODEL_NAME = "fancyfeast/llama-joycaption-beta-one-hf-llava"

# Load JoyCaption
# bfloat16 is the native dtype of the LLM used in JoyCaption (Llama 3.1)
# device_map=0 loads the model into the first GPU
processor = AutoProcessor.from_pretrained(MODEL_NAME)
llava_model = LlavaForConditionalGeneration.from_pretrained(MODEL_NAME, torch_dtype="bfloat16", device_map=0)
llava_model.eval()

with torch.no_grad():
    # Load image
    image = Image.open(IMAGE_PATH)

    # Build the conversation
    convo = [
        {
            "role": "system",
            "content": "You are a helpful image captioner.",
        },
        {
            "role": "user",
            "content": PROMPT,
        },
    ]

    # Format the conversation
    # WARNING: HF's handling of chats on Llava models is very fragile. This specific combination
    # of processor.apply_chat_template() and processor() works, but if you use other combinations,
    # always inspect the final input_ids to ensure they are correct. It is easy to end up with
    # multiple <bos> tokens if you are not careful, which can make the model perform poorly.
    convo_string = processor.apply_chat_template(convo, tokenize=False, add_generation_prompt=True)
    assert isinstance(convo_string, str)

    # Process the inputs
    inputs = processor(text=[convo_string], images=[image], return_tensors="pt").to('cuda')
    inputs['pixel_values'] = inputs['pixel_values'].to(torch.bfloat16)

    # Generate the caption
    generate_ids = llava_model.generate(
        **inputs,
        max_new_tokens=512,
        do_sample=True,
        suppress_tokens=None,
        use_cache=True,
        temperature=0.6,
        top_k=None,
        top_p=0.9,
    )[0]

    # Trim off the prompt
    generate_ids = generate_ids[inputs['input_ids'].shape[1]:]

    # Decode the caption
    caption = processor.tokenizer.decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
    caption = caption.strip()
    print(caption)
```
✨ Features
- Free and Open: Always released for free, with open weights and no restrictions. It comes with training scripts and detailed information on its construction, similar to bigASP.
- Uncensored: Covers both SFW and NSFW concepts equally.
- Diversity: Suitable for various image styles, content, ethnicities, genders, and orientations.
- Minimal Filtering: Trained on a large number of images to understand different aspects of the world, while strictly excluding illegal content.
📦 Installation
No specific installation steps are provided in the original document.
💻 Usage Examples
Basic Usage
The above code example demonstrates the basic usage of the model for image captioning.
Advanced Usage
vLLM provides high-performance inference for JoyCaption and exposes an OpenAI-compatible API.

```bash
vllm serve fancyfeast/llama-joycaption-beta-one-hf-llava --max-model-len 4096 --enable-prefix-caching
```

Note that VLMs on vLLM can be finicky, and vLLM is memory-hungry. You may need to adjust settings such as forcing eager mode, `--max-model-len`, and `gpu_memory_utilization` for your specific environment.
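Once the server is up, any OpenAI-compatible client can request captions. Below is a minimal sketch assuming the default vLLM port (8000), a placeholder API key, and a placeholder image URL:

```python
from openai import OpenAI

# vLLM's OpenAI-compatible server listens on port 8000 by default; the API key
# is a placeholder unless you started the server with --api-key.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

completion = client.chat.completions.create(
    model="fancyfeast/llama-joycaption-beta-one-hf-llava",
    messages=[
        {"role": "system", "content": "You are a helpful image captioner."},
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "https://example.com/image.jpg"}},
                {"type": "text", "text": "Write a long descriptive caption for this image in a formal tone."},
            ],
        },
    ],
    temperature=0.6,
    top_p=0.9,
    max_tokens=512,
)
print(completion.choices[0].message.content)
```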
📚 Documentation
Model Generation Details
This model was generated using llama.cpp at commit 5787b5da.
Quantization beyond the IMatrix
Testing a new quantization method that uses rules to elevate important layers above what the standard imatrix would use. The standard imatrix performs poorly at low-bit quantization and for MoE models, so llama.cpp's --tensor-type option is used to boost selected layers. See [Layer bumping with llama.cpp](https://github.com/Mungert69/GGUFModelBuilder/blob/main/model-converter/tensor_list_builder.py). This method creates larger model files but increases precision for a given model size.
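As a rough illustration of the idea (not the exact GGUFModelBuilder pipeline), the sketch below drives llama.cpp's llama-quantize from Python with per-tensor overrides. The binary path, file names, and tensor-name patterns are assumptions, and the --tensor-type syntax may differ between llama.cpp versions, so treat it as a starting point only.

```python
import subprocess

# Hypothetical paths: substitute your own F16 source GGUF and output file.
SRC = "llama-joycaption-beta-one-f16.gguf"
DST = "llama-joycaption-beta-one-q4_k_m-bumped.gguf"

# Bump attention and output tensors above the base Q4_K_M quantization.
# The tensor-name patterns and flag syntax here are assumptions; check
# `llama-quantize --help` for your build.
cmd = [
    "./llama-quantize",
    "--tensor-type", "attn_v=q8_0",
    "--tensor-type", "attn_k=q8_0",
    "--tensor-type", "output=q6_k",
    SRC, DST, "Q4_K_M",
]
subprocess.run(cmd, check=True)
```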
Choosing the Right Model Format
Selecting the correct model format depends on your hardware capabilities and memory constraints.
BF16 (Brain Float 16) – Use if BF16 acceleration is available
- A 16-bit floating-point format for faster computation with good precision.
- Has a dynamic range similar to FP32 but uses less memory.
- Recommended for hardware with BF16 acceleration (check device specs).
- Ideal for high-performance inference with reduced memory compared to FP32.
Use BF16 if:
- Your hardware has native BF16 support (e.g., newer GPUs, TPUs).
- You want higher precision while saving memory.
- You plan to requantize the model into another format.
Avoid BF16 if:
- Your hardware does not support BF16 (it may fall back to FP32 and run slower).
- You need compatibility with older devices lacking BF16 optimization.
F16 (Float 16) – More widely supported than BF16
- A 16-bit floating-point format with high precision but a smaller range of values than BF16.
- Works on most devices with FP16 acceleration support (including many GPUs and some CPUs).
- Slightly lower numerical precision than BF16 but generally sufficient for inference.
Use F16 if:
- Your hardware supports FP16 but not BF16 (a runtime dtype check is sketched after this section).
- You need a balance between speed, memory usage, and accuracy.
- You are running on a GPU or another device optimized for FP16 computations.
Avoid F16 if:
- Your device lacks native FP16 support (it may run slower than expected).
- You have tight memory limits (consider a quantized format instead).
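To act on the BF16/F16 guidance above, a minimal sketch (reusing the model name from the Quick Start; the fallback logic itself is only an illustration) can pick the best dtype the GPU supports before loading the full-precision checkpoint:

```python
import torch
from transformers import AutoProcessor, LlavaForConditionalGeneration

MODEL_NAME = "fancyfeast/llama-joycaption-beta-one-hf-llava"

# Prefer bfloat16 (the native dtype of the Llama 3.1 backbone) when the GPU
# supports it natively; otherwise fall back to float16.
dtype = torch.bfloat16 if torch.cuda.is_available() and torch.cuda.is_bf16_supported() else torch.float16

processor = AutoProcessor.from_pretrained(MODEL_NAME)
model = LlavaForConditionalGeneration.from_pretrained(
    MODEL_NAME,
    torch_dtype=dtype,
    device_map=0,
)
model.eval()
print(f"Loaded {MODEL_NAME} in {dtype}")
```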
Hybrid Precision Models (e.g., bf16_q8_0, f16_q4_K) – Best of Both Worlds
These formats selectively quantize non-essential layers while keeping key layers (e.g., attention and output layers) in full precision.
- Named like bf16_q8_0, meaning full-precision BF16 core layers plus quantized Q8_0 other layers.
- Strike a balance between memory efficiency and accuracy, better than fully quantized models without requiring the full memory of BF16/F16.
Use Hybrid Models if:
- You need better accuracy than quant-only models but can't afford full BF16/F16 everywhere.
- Your device supports mixed-precision inference.
- You want to optimize trade-offs for production-grade models on constrained hardware.
Avoid Hybrid Models if:
- Your target device doesn't support mixed- or full-precision acceleration.
- You are operating under ultra-strict memory limits (in which case use fully quantized formats).
Quantized Models (Q4_K, Q6_K, Q8, etc.) – For CPU & Low-VRAM Inference
Quantization reduces model size and memory usage while maintaining as much accuracy as possible.
- Lower-bit models (Q4_K) – Best for minimal memory usage, but may have lower precision.
- Higher-bit models (Q6_K, Q8_0) – Better accuracy, but require more memory.
Use Quantized Models if:
- You are running inference on a CPU and need an optimized model.
- Your device has low VRAM and cannot load full-precision models.
- You want to reduce memory footprint while keeping reasonable accuracy (see the loading sketch after this list).
Avoid Quantized Models if:
- You need maximum accuracy (full-precision models are better for this).
- Your hardware has enough VRAM for higher-precision formats (BF16/F16).
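If you are running one of the quantized GGUF files on a CPU or low-VRAM GPU, llama-cpp-python is one option. The sketch below is a rough outline only: the GGUF and mmproj file names are placeholders, and the choice of chat handler for JoyCaption's vision tower is an assumption, so check the llama.cpp and llama-cpp-python documentation for the correct multimodal setup.

```python
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

# Placeholder file names: point these at the quantized GGUF and the matching
# mmproj (vision projector) file you downloaded.
chat_handler = Llava15ChatHandler(clip_model_path="mmproj-model-f16.gguf")
llm = Llama(
    model_path="llama-joycaption-beta-one-q4_k_m.gguf",
    chat_handler=chat_handler,
    n_ctx=4096,        # matches the --max-model-len used in the vLLM example
    n_gpu_layers=-1,   # offload what fits; set to 0 for CPU-only inference
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful image captioner."},
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "file:///path/to/image.jpg"}},
                {"type": "text", "text": "Write a long descriptive caption for this image in a formal tone."},
            ],
        },
    ],
    max_tokens=512,
    temperature=0.6,
)
print(response["choices"][0]["message"]["content"])
```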
Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)
These models are optimized for very high memory efficiency, suitable for low-power devices or large-scale deployments with strict memory constraints.
- IQ3_XS: Ultra-low-bit quantization (3-bit) with very high memory efficiency.
  - Use case: Best for ultra-low-memory devices where even Q4_K is too large.
  - Trade-off: Lower accuracy compared to higher-bit quantizations.
- IQ3_S: Small block size for maximum memory efficiency.
  - Use case: Best for low-memory devices where IQ3_XS is too aggressive.
- IQ3_M: Medium block size for better accuracy than IQ3_S.
  - Use case: Suitable for low-memory devices where IQ3_S is too limiting.
- Q4_K: 4-bit quantization with block-wise optimization for better accuracy.
  - Use case: Best for low-memory devices where Q6_K is too large.
- Q4_0: Pure 4-bit quantization, optimized for ARM devices.
  - Use case: Best for ARM-based devices or low-memory environments.
Ultra Low-Bit Quantization (IQ1_S, IQ1_M, IQ2_S, IQ2_M, IQ2_XS, IQ2_XXS)
- Ultra-low-bit quantization (1–2-bit) with extreme memory efficiency.
- Use case: Best when you need to fit the model into very limited memory.
- Trade-off: Very low accuracy. May not function as expected. Test fully before using.
Summary Table: Model Format Selection

Model Format | Precision | Memory Usage | Device Requirements | Best Use Case |
---|---|---|---|---|
BF16 | Very High | High | BF16-supported GPU/CPU | High-speed inference with reduced memory |
F16 | High | High | FP16-supported GPU/CPU | Inference when BF16 isn't available |
Q4_K | Medium-Low | Low | CPU or low-VRAM devices | Memory-constrained inference |
Q6_K | Medium | Moderate | CPU with more memory | Better accuracy with quantization |
Q8_0 | High | Moderate | GPU/CPU with moderate VRAM | Highest accuracy among quantized models |
IQ3_XS | Low | Very Low | Ultra-low-memory devices | Maximum memory efficiency, low accuracy |
IQ3_S | Low | Very Low | Low-memory devices | Slightly more usable than IQ3_XS |
IQ3_M | Low-Medium | Low | Low-memory devices | Better accuracy than IQ3_S |
Q4_0 | Low | Low | ARM-based/embedded devices | llama.cpp automatically optimizes for ARM inference |
Ultra Low-Bit (IQ1/2_*) | Very Low | Extremely Low | Tiny edge/embedded devices | Fits models in extremely tight memory; low accuracy |
Hybrid (e.g., bf16_q8_0) | Medium–High | Medium | Mixed-precision capable hardware | Balanced performance and memory; near-FP accuracy in critical layers |

Property | Details |
---|---|
Model Type | Image captioning Visual Language Model (VLM) built on meta-llama/Llama-3.1-8B-Instruct and google/siglip2-so400m-patch14-384 |
Training Data | Not provided in the original document |
🔧 Technical Details
The model is built on the following base models:
- meta-llama/Llama-3.1-8B-Instruct
- google/siglip2-so400m-patch14-384

It uses the transformers library and carries the image-text-to-text pipeline tag, with captioning among its tags.
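As an optional sanity check (not part of the original card), the snippet below loads only the HF config and prints the two sub-model types that back this LLaVA-style wrapper:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("fancyfeast/llama-joycaption-beta-one-hf-llava")

# A LLaVA-style config nests a text backbone config (Llama 3.1) and a vision
# tower config (SigLIP2) under one wrapper config.
print(type(config).__name__)            # wrapper config class
print(config.text_config.model_type)    # text backbone
print(config.vision_config.model_type)  # vision tower
```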
📄 License
No license information is provided in the original document.
Additional Testing Information
Test the Free Network Monitor Assistant
Help test the AI-powered Free Network Monitor Assistant with quantum-ready security checks: Free Network Monitor.
The full Open Source Code for the Free Network Monitor Service is available at Source Code Free Network Monitor. You can also find the code for model quantization at GGUFModelBuilder.
How to test
Choose an AI assistant type:
- TurboLLM (GPT-4.1-mini)
- HugLLM (Hugging Face open-source models)
- TestLLM (Experimental CPU-only)
What is being tested
Pushing the limits of small open-source models for AI network monitoring, specifically:
- Function calling against live network services
- How small a model can be while handling:
- Automated Nmap security scans
- Quantum-readiness checks
- Network Monitoring tasks
TestLLM – Current experimental model (llama.cpp on 2 CPU threads on a Hugging Face Docker space)
- Zero-configuration setup
- 30s load time (slow inference but no API costs). No token limit as the cost is low.
- Help wanted! If you're into edge-device AI, let's collaborate!
Other Assistants
- TurboLLM – Uses gpt-4.1-mini:
  - Performs well, but OpenAI charges per token, so token usage is limited.
  - Can create custom cmd processors to run .NET code on Free Network Monitor Agents.
  - Provides real-time network diagnostics and monitoring, security audits, and penetration testing (Nmap/Metasploit).
- HugLLM – Latest open-source models:
  - Runs on the Hugging Face Inference API and performs well using the latest models hosted on Novita.
Example commands you could test
"Give me info on my websites SSL certificate"
"Check if my server is using quantum safe encyption for communication"
"Run a comprehensive security audit on my server"
"Create a cmd processor to .. (what ever you want)"
Note that you need to install a Free Network Monitor Agent to run the .NET code. This is a very flexible and powerful feature. Use with caution!
Final Word
The author funds the servers for creating model files, running the Free Network Monitor service, and paying for inference from Novita and OpenAI. All the code behind the model creation and the Free Network Monitor project is open source. If you find the models useful, consider buying the author a coffee. The author is also open to job opportunities or sponsorship.






