
LlavaGuard v1.2 0.5B OV

Developed by AIML-TUDA
LlavaGuard is a safety guard built on vision-language models, used primarily to classify image content for safety and to detect policy violations.
Downloads: 239
Release date: 11/22/2024

Model Overview

LlavaGuard is a lightweight vision-language model for safety evaluation of user-provided content: given an image and a safety policy, it determines whether the image violates that policy.

Model Features

Efficient and Lightweight
At only 0.5B parameters, the model keeps inference efficient while maintaining strong performance.
Large Context Window
Supports a 32K-token context window, large enough for long safety-policy prompts and complex content.
Multi-Policy Classification
Classifies content against nine safety policy categories, including hate, violence, and sexual content (the full taxonomy is sketched below).
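
For reference, the nine categories of the LlavaGuard taxonomy can be written down as a simple mapping. The sketch below is an assumption: the category names follow the LlavaGuard paper and should be verified against the official model card for this release.

# Policy categories per the LlavaGuard taxonomy (assumption: names are
# taken from the paper; exact wording may differ between releases).
LLAVAGUARD_CATEGORIES = {
    "O1": "Hate, Humiliation, Harassment",
    "O2": "Violence, Harm, or Cruelty",
    "O3": "Sexual Content",
    "O4": "Nudity Content",
    "O5": "Criminal Planning",
    "O6": "Weapons or Substance Abuse",
    "O7": "Self-Harm",
    "O8": "Animal Cruelty",
    "O9": "Disasters or Emergencies",
}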

Model Capabilities

Image Safety Evaluation
Multi-Policy Classification
JSON Format Output
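
To illustrate these capabilities end to end, here is a minimal inference sketch. It assumes the checkpoint is published on Hugging Face as AIML-TUDA/LlavaGuard-v1.2-0.5B-OV and loads with the transformers LLaVA-OneVision classes; the safety-policy prompt is abbreviated to a placeholder, since the full policy text ships with the model.

# Minimal inference sketch -- assumes the Hugging Face checkpoint
# "AIML-TUDA/LlavaGuard-v1.2-0.5B-OV" and a transformers version with
# LLaVA-OneVision support; verify names against the official model card.
import json

import torch
from PIL import Image
from transformers import AutoProcessor, LlavaOnevisionForConditionalGeneration

model_id = "AIML-TUDA/LlavaGuard-v1.2-0.5B-OV"
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaOnevisionForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# The real policy prompt is much longer; abbreviated here as a placeholder.
policy = "Assess the image against the safety policy and answer in JSON."

conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": policy},
        ],
    }
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)

image = Image.open("upload.jpg")
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=200, do_sample=False)
reply = processor.decode(output[0][inputs["input_ids"].shape[1]:],
                         skip_special_tokens=True)
verdict = json.loads(reply)
print(verdict)

If generation succeeds, verdict should be a dict with a safety rating, the triggered policy category, and a rationale; the exact field names are an assumption here and should be checked against the model card.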

Use Cases

Content Moderation
Social Media Content Moderation
Automatically detects whether user-uploaded images contain prohibited content, covering the nine policy categories above (a gating sketch follows this list).
Academic Research
Safety Policy Research
Used to study the effectiveness of different safety policies in real-world content moderation.
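
As a sketch of how a verdict could gate an upload pipeline (referenced in the content-moderation use case above), the helper below assumes the JSON fields from the inference example; assess_image is a hypothetical callable wrapping that code.

# Hypothetical moderation gate; assess_image is assumed to return the
# parsed JSON verdict from the inference sketch above, e.g.
# {"rating": "Unsafe", "category": "O2: ...", "rationale": "..."}.
from typing import Callable

def moderate_upload(image_path: str, assess_image: Callable[[str], dict]) -> bool:
    """Return True to publish the upload, False to block it."""
    verdict = assess_image(image_path)
    if verdict.get("rating", "").lower() == "unsafe":
        # Surface the triggered category and rationale for human review.
        print(f"Blocked ({verdict.get('category')}): {verdict.get('rationale')}")
        return False
    return True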