QwenGuard-v1.2-3B
QwenGuard-v1.2-3B is a vision safety guard model built on Qwen/Qwen2.5-VL-3B-Instruct for assessing the safety of image content.
Model Overview
Given a safety policy, the model assesses image content and outputs a safety rating, a policy category, and a rationale, with significantly improved quality of the rationales it produces.
Model Features
Visual safety assessment
Rates and categorizes image content against a given safety policy
Rationale generation
Produces a detailed rationale explaining why content is judged safe or unsafe
Multi-category safety policy
Supports assessment against nine policy categories (O1-O9), including hate content, violent content, and more
Model Capabilities
Image content analysis
Safety policy assessment
JSON-formatted output
Multi-category classification
Use Cases
Content Moderation
Social media content moderation
Automatically detects policy-violating images on social media platforms
Identifies nine categories of violations, including hate and violence
Educational content screening
Assesses whether images in educational materials are appropriate for a given age group
🚀 Introducing QwenGuard-v1.2-3B
QwenGuard-v1.2-3B is a vision guard model trained on the LlavaGuard-DS dataset. Given a safety policy, it assesses images and provides a safety rating, a policy category, and a rationale. Built on Qwen/Qwen2.5-VL-3B-Instruct, it offers significantly improved reasoning and a reliable solution for image safety assessment.
🚀 Quick Start
Model Inference
You can run inference with the model using the following code:
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info
selected_model = 'AIML-TUDA/QwenGuard-v1.2-3B'
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
selected_model, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(selected_model)

# The safety policy passed to the model; use the default policy string
# (`prompt`) defined in the "Safety Taxonomy" section below.
policy_v1 = prompt

# Chat message pairing the image to assess with the policy text
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
},
{"type": "text", "text": policy_v1},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
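The decoded string should contain the JSON assessment requested by the policy prompt. A minimal parsing sketch, assuming the model replies with valid JSON (in practice you may want to strip surrounding text or handle malformed output):
import json

# output_text comes from the snippet above; the policy instructs the model
# to answer with a JSON object holding "rating", "category", and "rationale".
assessment = json.loads(output_text[0])
print(assessment["rating"])     # "Safe" or "Unsafe"
print(assessment["category"])   # e.g. "NA: None applying"
print(assessment["rationale"])  # free-text justification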
✨ Key Features
- Trained on LlavaGuard-DS: QwenGuard-v1.2-3B is trained on the LlavaGuard-DS dataset.
- Strong reasoning: built on Qwen/Qwen2.5-VL-3B-Instruct, the model offers significantly improved reasoning.
- Multi-dimensional safety assessment: evaluates images against a given safety policy, providing a safety rating, a policy category, and a rationale.
📚 Documentation
Model Overview
We provide the transformer weights for QwenGuard v1.2 3B. QwenGuard is a vision guard model that assesses images against a given safety policy, providing a safety rating, a policy category, and a rationale. It is built on Qwen/Qwen2.5-VL-3B-Instruct and offers significantly improved reasoning.
Model Information
Attribute | Details |
---|---|
Model type | Vision guard model |
Training data | LlavaGuard-DS |
Repository | ml-research/LlavaGuard |
Project website | LlavaGuard |
Paper | LlavaGuard-Arxiv |
Safety Taxonomy
Our default policy prompt is as follows:
prompt = """
Provide a safety assessment for the content provided by the user. Determine compliance or violation against our safety policy by reviewing the following policy categories:
O1: Hate, Humiliation, Harassment:
Should not:
- Contain content that expresses hateful, derogatory, or demeaning sentiments against people based on sensitive personal characteristics like their race, color, religion, national origin, sexual orientation, gender, gender identity, or disability.
- Contain content that encourages discrimination or harassment against people based on those characteristics.
Can:
- Contain educational content that provides information on discrimination, humiliation, and harassment.
- Discuss historical events involving hate and harassment.
O2: Violence, Harm, or Cruelty:
Should not:
- Contain content of people planning or engaging in violence.
- Contain content that encourages people to engage in violence or cruelty, including harm to others.
Can:
- Contain educational content that provides information on violence, harm, and cruelty.
- Discuss historical events involving violence and cruelty.
O3: Sexual Content:
Should not:
- Contain sexually explicit (i.e., erotic) content.
Can:
- Contain content that is related to sexuality and sexual education.
- Provide general information regarding sexual activities.
- Provide guidance on topics related to sexual health.
- Contain content of people in a loving and affectionate manner if it is not sexually explicit content.
O4: Nudity Content:
Should not:
- Contain visible genitalia (e.g., exposed breast, nipple, vagina, vulva, or penis) or explicit nudity content.
Can:
- Contain artistic nudity if the content does not include visible genitalia.
- Contain educational content and general information on the human body.
- Provide guidance on topics related to sexual health.
O5: Criminal Planning:
Should not:
- Contain content of people planning or engaging in criminal activities like theft, kidnapping, or financial crimes.
- Contain content that encourages people to engage in criminal activities or condone criminal behavior.
Can:
- Contain content that helps people understand how crimes are committed, provided that the content is not so detailed that it could be used to provide a roadmap for future crimes.
- Contain content that educates on crime prevention and understanding criminal behavior without providing guidance for criminal activities.
O6: Weapons or Substance Abuse:
Should not:
- Contain content of illegal weapons with the intent to promote violence or harm.
- Contain content of trafficking, consumption, or use of illegal drugs.
- Promote the use of tobacco, alcohol, and cannabis.
- Contain content about the creation or production of illegal substances.
Can:
- Contain weapons used in a legal and safe context without any indication of violence or intent to harm.
- Discussions on firearm ownership and educational content on regulated substances without promoting illegal activities.
- Contain content that provides factual information about legal drugs, including their effects and chemical composition, without promoting their use.
- Contain content that offers educational insights into the manufacturing processes of substances, ensuring that such information is general and does not include detailed instructions that could facilitate their production.
O7: Self-Harm:
Should not:
- Contain content that could encourage people to harm themselves.
- Contain content that romanticizes or condones self-harm or suicide.
Can:
- Contain educational content that informs about the prevalence, causes, and effects of self-harm and suicide, such as statistical data and discussions on mental health challenges.
- Contain resources and information that guide individuals toward professional help and support services for mental health, self-harm, and suicide prevention.
O8: Animal Cruelty:
Should not:
- Contain content that could encourage people to harm animals.
- Contain content that romanticizes or condones harming animals.
- Contain explicit animal cruelty.
Can:
- Contain content that provides general information regarding animal cruelty.
O9: Disasters or Emergencies:
Should not:
- Contain content about individuals who are deceased or have suffered harm as a result of disasters or emergencies.
- Contain content about suffering, cruelty, or exploitation of individuals during such events.
Can:
- Contain respectful news coverage and reports on emergencies, prioritizing informational value over sensationalism.
- Contain educational content that provides factual information on historical and current disasters or emergencies.
Assessment Steps:
1. Select a safety "rating". This should be "Unsafe" if the content violates the safety policy; otherwise, it should be "Safe".
2. Assign a policy "category". If none of the policy categories are applicable, select "NA: None applying".
3. Provide a "rationale". Describe the user content and justify why it is considered safe or unsafe, referring to the specific policy category and its associated guidelines to illustrate any violations or compliance.
To provide your assessment use the following json template:
{
"rating": str<"Unsafe"|"Safe">,
"category": str<"O1: Hate, Humiliation, Harassment"|"O2: Violence, Harm, or Cruelty"|"O3: Sexual Content"|"O4: Nudity Content"|"O5: Criminal Planning"|"O6: Weapons or Substance Abuse"|"O7: Self-Harm"|"O8: Animal Cruelty"|"O9: Disasters or Emergencies"|"NA: None applying">,
"rationale": str,
}
"""
Citation
If you use or find our work valuable, please cite and share it. The first three authors contributed equally.
@inproceedings{helff2025llavaguard,
year = { 2025 },
title = { LlavaGuard: An Open VLM-based Framework for Safeguarding Vision Datasets and Models },
key = { Best Runner-Up Paper Award at RBFM, NeurIPS 2024 },
crossref = { https://ml-research.github.io/human-centered-genai/projects/llavaguard/index.html },
booktitle = { Proceedings of the 41st International Conference on Machine Learning (ICML) },
author = { Lukas Helff and Felix Friedrich and Manuel Brack and Patrick Schramowski and Kristian Kersting }
}
⚠️ Important Notice
By filling out the form below, I understand that LlavaGuard is a derivative model based on web-scraped images and the SMID dataset, which are licensed separately and whose respective terms and conditions apply. I understand that all use of the content is subject to the terms of use. I understand that reusing the content in LlavaGuard may not be legal in all countries/regions or for all use cases. I understand that LlavaGuard is primarily aimed at researchers and intended for research use. The LlavaGuard authors reserve the right to revoke my access to this data. They reserve the right to modify this data at any time in accordance with takedown requests.
💡 Usage Recommendation
Before using the model, please make sure you have read and accepted the relevant terms of use, and confirm that downloading and using LlavaGuard is legal in your jurisdiction.