
LlavaGuard v1.2 7B OV

Developed by AIML-TUDA
LlavaGuard is a safety assessment system based on a vision-language model, primarily used for safety classification and compliance checks of image content.
Downloads 193
Release Time: 11/7/2024

Model Overview

LlavaGuard is a safety assessment system based on a vision-language model, designed to evaluate the safety of user-provided image content and determine its compliance with predefined safety policy categories.
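Since this page gives no usage instructions, the following is a minimal inference sketch. It assumes the model is published on Hugging Face under the repository ID AIML-TUDA/LlavaGuard-v1.2-7B-OV and loads with the Transformers LLaVA-OneVision classes; the safety-policy prompt shown is illustrative, not the official one.

```python
# Minimal inference sketch. Assumptions: the repo ID
# "AIML-TUDA/LlavaGuard-v1.2-7B-OV" and a Transformers version with
# LLaVA-OneVision support; the policy prompt below is illustrative only.
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaOnevisionForConditionalGeneration

model_id = "AIML-TUDA/LlavaGuard-v1.2-7B-OV"  # assumed repository name
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaOnevisionForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Placeholder safety-policy prompt; the real prompt enumerates the
# policy categories the model was trained to assess.
policy_prompt = (
    "Assess the image against the provided safety policy and respond "
    "with a JSON object containing your rating, the matched policy "
    "category, and a short rationale."
)
conversation = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": policy_prompt},
    ]}
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)

image = Image.open("example.jpg")  # any local image to be assessed
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(processor.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```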

Model Features

32K Token Context Window
Supports a context window of up to 32K tokens, large enough to fit a detailed safety policy alongside the image in a single assessment prompt.
Improved Inference Logic
This version's improved inference logic is reported to deliver the best overall performance of the LlavaGuard series to date.
Academic Research-Oriented
Aimed primarily at researchers and intended for use in academic research.

Model Capabilities

Image Content Safety Assessment
Multi-Category Policy Compliance Check
JSON-Formatted Result Output
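The JSON output makes the verdict easy to post-process. The field names below ("rating", "category", "rationale") are assumed for illustration; consult the official model card for the exact schema.

```python
# Parsing a JSON verdict. The field names and values here are assumed
# for illustration; check the official model card for the exact schema.
import json

raw = """
{
  "rating": "Unsafe",
  "category": "O3: Violence",
  "rationale": "The image depicts graphic physical violence."
}
"""
verdict = json.loads(raw)
if verdict["rating"].lower() == "unsafe":
    print(f"Flagged under {verdict['category']}: {verdict['rationale']}")
```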

Use Cases

Content Moderation
Social Media Content Moderation
Used to automatically detect non-compliant image content on social media platforms.
Can flag hate speech, violence, and other policy violations; a minimal pipeline sketch follows below.
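A hypothetical moderation filter wired to LlavaGuard's JSON verdicts. Here assess_image() is a stub standing in for the generate-and-parse steps shown earlier; the filenames are placeholders.

```python
# Hypothetical moderation filter. assess_image() is a stub standing in
# for a real LlavaGuard call (generate a response for the image, then
# json.loads() it, as in the earlier sketches).
from typing import Dict, List

def assess_image(path: str) -> Dict[str, str]:
    # Stub: replace with an actual model call in practice.
    return {"rating": "Safe", "category": "NA: None applying", "rationale": ""}

def moderate(paths: List[str]) -> List[str]:
    """Return the subset of images flagged as non-compliant."""
    return [p for p in paths if assess_image(p)["rating"].lower() == "unsafe"]

print(moderate(["post_001.jpg", "post_002.jpg"]))  # placeholder filenames
```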
Academic Research
Multimodal Safety Research
Used to study the application of vision-language models in the field of content safety.