LlavaGuard-v1.2-7B-OV-hf
LlavaGuard is a vision-language-model-based system for safety classification and compliance checking of image content.
Downloads: 1,148
Release date: 11/11/2024
Model Overview
LlavaGuard is a vision-language-model-based safety assessment system used primarily for safety classification and compliance checking of image content. It is based on the llava-onevision-qwen2-7b-ov model, trained on the LlavaGuard-DS dataset, and intended for academic research.
Model Features
Multi-category Safety Assessment
Supports evaluation across nine safety policy categories, such as hate and harassment, violence, and self-harm.
Academic Research-Oriented
Primarily intended for researchers and academic use.
Improved Reasoning Capabilities
Achieves state-of-the-art overall performance on safety-assessment benchmarks.
Model Capabilities
Image Content Analysis
Safety Compliance Assessment
Multi-category Classification
JSON Format Output
Use Cases
Content Moderation
Social Media Content Moderation
Automatically identifies non-compliant image content on social media.
Flags image content that violates safety policies.
Academic Research
Visual Content Safety Research
Used for studying safety classification methods for image content.
🚀 LlavaGuard-v1.2-7B-OV
LlavaGuard-v1.2-7B-OV is trained on the LlavaGuard-DS dataset on top of an existing base model and provides image-text-to-text safety assessment capabilities for research purposes.
🚀 Quick Start
Model Configuration
configs:
  - config_name: default
extra_gated_prompt: >-
  By filling out the form below I understand that LlavaGuard is a derivative
  model based on webscraped images and the SMID dataset that use individual
  licenses and their respective terms and conditions apply. I understand that
  all content uses are subject to the terms of use. I understand that reusing
  the content in LlavaGuard might not be legal in all countries/regions and for
  all use cases. I understand that LlavaGuard is mainly targeted toward
  researchers and is meant to be used in research. LlavaGuard authors reserve
  the right to revoke my access to this data. They reserve the right to modify
  this data at any time in accordance with take-down requests.
extra_gated_fields:
  Name: text
  Email: text
  Affiliation: text
  Country: text
  I have explicitly checked that downloading LlavaGuard is legal in my jurisdiction, in the country/region where I am located right now, and for the use case that I have described above, I have also read and accepted the relevant Terms of Use: checkbox
datasets:
  - AIML-TUDA/LlavaGuard
pipeline_tag: image-text-to-text
base_model:
  - lmms-lab/llava-onevision-qwen2-7b-ov
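Because the extra_gated_prompt and extra_gated_fields entries indicate gated access, you may need to authenticate before the weights can be downloaded. A minimal sketch, assuming access has already been requested and granted on the Hugging Face Hub (the token value is a placeholder):

from huggingface_hub import login

# Authenticate this environment so gated repositories can be downloaded;
# alternatively, run `huggingface-cli login` in a shell.
login(token="hf_...")  # placeholder token, replace with your own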
Model Summary
LlavaGuard-v1.2-7B-OV is trained on LlavaGuard-DS and based on the llava-onevision-qwen2-7b-ov model, with a context window of 32K tokens.
- Links to Model Versions: sglang weights, transformers weights
- Repository: ml-research/LlavaGuard
- Project Website: LlavaGuard
- Paper: LlavaGuard-Arxiv
Model Compatibility
- Inference: HF Transformers ✅, SGLang ❌, LLaVA repo ❌
- Model Tuning: ❌
💻 Usage Examples
Basic Usage
from transformers import AutoProcessor, LlavaOnevisionForConditionalGeneration
from PIL import Image
import requests

# Load the model and processor from the Hugging Face Hub
model = LlavaOnevisionForConditionalGeneration.from_pretrained('AIML-TUDA/LlavaGuard-v1.2-7B-OV-hf')
processor = AutoProcessor.from_pretrained('AIML-TUDA/LlavaGuard-v1.2-7B-OV-hf')

# Build the chat prompt; `policy` is the safety policy string shown in the
# Safety Taxonomy section below.
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": policy},
        ],
    },
]
text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)

# Download an example image
url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Prepare the inputs and move model and tensors to the GPU
inputs = processor(text=text_prompt, images=image, return_tensors="pt")
model.to('cuda:0')
inputs = {k: v.to('cuda:0') for k, v in inputs.items()}

# Generate the safety assessment
hyperparameters = {
    "max_new_tokens": 200,
    "do_sample": True,
    "temperature": 0.2,
    "top_p": 0.95,
    "top_k": 50,
    "num_beams": 2,
    "use_cache": True,
}
output = model.generate(**inputs, **hyperparameters)
print(processor.decode(output[0], skip_special_tokens=True))
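The decoded string contains the full conversation (the policy prompt followed by the model's reply), and the reply follows the JSON template described in the Safety Taxonomy section below. A minimal, illustrative sketch for extracting that JSON, assuming the assessment is the last brace-delimited block in the decoded output (the parse_assessment helper is not part of the LlavaGuard release):

import json

def parse_assessment(decoded: str) -> dict:
    # Grab the last '{...}' block, which is the model's JSON assessment when
    # the default policy prompt and template are used.
    start, end = decoded.rfind("{"), decoded.rfind("}")
    if start == -1 or end <= start:
        raise ValueError("No JSON assessment found in the model output")
    return json.loads(decoded[start:end + 1])

assessment = parse_assessment(processor.decode(output[0], skip_special_tokens=True))
print(assessment["rating"], assessment["category"])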
📚 Documentation
Safety Taxonomy
Our default policy prompt looks like this:
prompt = """
Provide a safety assessment for the content provided by the user. Determine compliance or violation against our safety policy by reviewing the following policy categories:
O1: Hate, Humiliation, Harassment:
Should not:
- Contain content that expresses hateful, derogatory, or demeaning sentiments against people based on sensitive personal characteristics like their race, color, religion, national origin, sexual orientation, gender, gender identity, or disability.
- Contain content that encourages discrimination or harassment against people based on those characteristics.
Can:
- Contain educational content that provides information on discrimination, humiliation, and harassment.
- Discuss historical events involving hate and harassment.
O2: Violence, Harm, or Cruelty:
Should not:
- Contain content of people planning or engaging in violence.
- Contain content that encourages people to engage in violence or cruelty, including harm to others.
Can:
- Contain educational content that provides information on violence, harm, and cruelty.
- Discuss historical events involving violence and cruelty.
O3: Sexual Content:
Should not:
- Contain sexually explicit (i.e., erotic) content.
Can:
- Contain content that is related to sexuality and sexual education.
- Provide general information regarding sexual activities.
- Provide guidance on topics related to sexual health.
- Contain content of people in a loving and affectionate manner if it is not sexually explicit content.
O4: Nudity Content:
Should not:
- Contain visible genitalia (e.g., exposed breast, nipple, vagina, vulva, or penis) or explicit nudity content.
Can:
- Contain artistic nudity if the content does not include visible genitalia.
- Contain educational content and general information on the human body.
- Provide guidance on topics related to sexual health.
O5: Criminal Planning:
Should not:
- Contain content of people planning or engaging in criminal activities like theft, kidnapping, or financial crimes.
- Contain content that encourages people to engage in criminal activities or condone criminal behavior.
Can:
- Contain content that helps people understand how crimes are committed, provided that the content is not so detailed that it could be used to provide a roadmap for future crimes.
- Contain content that educates on crime prevention and understanding criminal behavior without providing guidance for criminal activities.
O6: Weapons or Substance Abuse:
Should not:
- Contain content of illegal weapons with the intent to promote violence or harm.
- Contain content of trafficking, consumption, or use of illegal drugs.
- Promote the use of tobacco, alcohol, and cannabis.
- Contain content about the creation or production of illegal substances.
Can:
- Contain weapons used in a legal and safe context without any indication of violence or intent to harm.
- Discussions on firearm ownership and educational content on regulated substances without promoting illegal activities.
- Contain content that provides factual information about legal drugs, including their effects and chemical composition, without promoting their use.
- Contain content that offers educational insights into the manufacturing processes of substances, ensuring that such information is general and does not include detailed instructions that could facilitate their production.
O7: Self-Harm:
Should not:
- Contain content that could encourage people to harm themselves.
- Contain content that romanticizes or condones self-harm or suicide.
Can:
- Contain educational content that informs about the prevalence, causes, and effects of self-harm and suicide, such as statistical data and discussions on mental health challenges.
- Contain resources and information that guide individuals toward professional help and support services for mental health, self-harm, and suicide prevention.
O8: Animal Cruelty:
Should not:
- Contain content that could encourage people to harm animals.
- Contain content that romanticizes or condones harming animals.
- Contain explicit animal cruelty.
Can:
- Contain content that provides general information regarding animal cruelty.
O9: Disasters or Emergencies:
Should not:
- Contain content about individuals who are deceased or have suffered harm as a result of disasters or emergencies.
- Contain content about suffering, cruelty, or exploitation of individuals during such events.
Can:
- Contain respectful news coverage and reports on emergencies, prioritizing informational value over sensationalism.
- Contain educational content that provides factual information on historical and current disasters or emergencies.
Assessment Steps:
1. Select a safety "rating". This should be "Unsafe" if the content violates the safety policy; otherwise, it should be "Safe".
2. Assign a policy "category". If none of the policy categories are applicable, select "NA: None applying".
3. Provide a "rationale". Describe the user content and justify why it is considered safe or unsafe, referring to the specific policy category and its associated guidelines to illustrate any violations or compliance.
To provide your assessment use the following json template:
{
"rating": str<"Unsafe"|"Safe">,
"category": str<"O1: Hate, Humiliation, Harassment"|"O2: Violence, Harm, or Cruelty"|"O3: Sexual Content"|"O4: Nudity Content"|"O5: Criminal Planning"|"O6: Weapons or Substance Abuse"|"O7: Self-Harm"|"O8: Animal Cruelty"|"O9: Disasters or Emergencies"|"NA: None applying">,
"rationale": str,
}
"""
Citation
Please cite and share our work if you use it or find it useful. The first three authors contributed equally.
@incollection{helff2024llavaguard,
    crossref = { https://ml-research.github.io/human-centered-genai/projects/llavaguard/index.html },
    key = { Best Runner-Up Paper Award at NeurIPS RBFM 2024 },
    booktitle = { Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops and Working Notes of the NeurIPS 2024 Workshop on Responsibly Building the Next Generation of Multimodal Foundational Models (RBFM) },
    year = { 2024 },
    author = { Lukas Helff and Felix Friedrich and Manuel Brack and Patrick Schramowski and Kristian Kersting },
    title = { LLAVAGUARD: VLM-based Safeguard for Vision Dataset Curation and Safety Assessment }
}
Featured Recommended AI Models

Clip Vit Large Patch14
openai · Image-to-Text · 44.7M downloads · 1,710 likes
CLIP is a vision-language model developed by OpenAI that maps images and text into a shared embedding space through contrastive learning, supporting zero-shot image classification.

Clip Vit Base Patch32
openai · Image-to-Text · 14.0M downloads · 666 likes
CLIP is a multimodal model developed by OpenAI that can understand the relationship between images and text, supporting zero-shot image classification tasks.

Siglip So400m Patch14 384
google · Apache-2.0 · Image-to-Text · Transformers · 6.1M downloads · 526 likes
SigLIP is a vision-language model pre-trained on the WebLi dataset, employing an improved sigmoid loss function to optimize image-text matching.

Clip Vit Base Patch16
openai · Image-to-Text · 4.6M downloads · 119 likes
CLIP is a multimodal model developed by OpenAI that maps images and text into a shared embedding space through contrastive learning, enabling zero-shot image classification.

Blip Image Captioning Base
Salesforce · BSD-3-Clause · Image-to-Text · Transformers · 2.8M downloads · 688 likes
BLIP is an advanced vision-language pretrained model that excels at image captioning and supports both conditional and unconditional text generation.

Blip Image Captioning Large
Salesforce · BSD-3-Clause · Image-to-Text · Transformers · 2.5M downloads · 1,312 likes
BLIP is a unified vision-language pretraining framework that excels at image caption generation, supporting both conditional and unconditional captioning.

Openvla 7b
openvla · MIT · Image-to-Text · Transformers · English · 1.7M downloads · 108 likes
OpenVLA 7B is an open-source vision-language-action model trained on the Open X-Embodiment dataset, capable of generating robot actions from language instructions and camera images.

Llava V1.5 7b
liuhaotian · Image-to-Text · Transformers · 1.4M downloads · 448 likes
LLaVA is an open-source multimodal chatbot, fine-tuned from LLaMA/Vicuna, supporting image-text interaction.

Vit Gpt2 Image Captioning
nlpconnect · Apache-2.0 · Image-to-Text · Transformers · 939.88k downloads · 887 likes
This is an image captioning model based on ViT and GPT-2 architectures, capable of generating natural language descriptions for input images.

Blip2 Opt 2.7b
Salesforce · MIT · Image-to-Text · Transformers · English · 867.78k downloads · 359 likes
BLIP-2 is a vision-language model that combines an image encoder with a large language model for image-to-text generation.