
LlavaGuard v1.2 7B OV (HF)

Developed by AIML-TUDA
LlavaGuard is a vision-language model-based safety assessment system for security classification and compliance checking of image content.
Downloads: 1,148
Release Date: 11/11/2024

Model Overview

LlavaGuard classifies image content and checks it for compliance against a set of safety policies. It is built on the llava-onevision-qwen2-7b-ov model, trained on the LlavaGuard-DS dataset, and is intended for academic research use.
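As a vision-language model, LlavaGuard is queried with an image plus a text instruction describing the safety policy to apply. A minimal sketch of assembling such a request in the Hugging Face chat-message convention; the policy snippet, image path, and message layout here are illustrative assumptions, not the model's official prompt format:

```python
# Hypothetical sketch: build a chat-style request asking a safety model
# to assess one image against a policy and answer in JSON.
# The policy text and field layout are assumptions for illustration.

def build_assessment_messages(policy_text, image_path):
    """Return a chat-message list pairing an image with a policy prompt."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "path": image_path},
                {
                    "type": "text",
                    "text": (
                        "Assess the image under the following safety policy "
                        "and respond as a JSON object:\n" + policy_text
                    ),
                },
            ],
        }
    ]

messages = build_assessment_messages("O1: Hate Speech ...", "photo.jpg")
print(messages[0]["role"])  # user
```

In practice this message list would be passed to the model's processor and generation call; the helper only shows the request shape.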

Model Features

Multi-category Safety Assessment
Supports evaluation across 9 safety policy categories, including hate speech and violent content.
Academic Research-Oriented
Primarily designed for researchers and academic study purposes.
Improved Reasoning Capabilities
Delivers state-of-the-art overall performance on safety-assessment evaluation benchmarks.

Model Capabilities

Image Content Analysis
Safety Compliance Assessment
Multi-category Classification
JSON Format Output
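Because the model emits its assessment as JSON, downstream code typically parses the reply into a structured verdict. A minimal sketch, assuming (hypothetically) that the reply carries `rating`, `category`, and `rationale` fields and may be wrapped in a Markdown code fence:

```python
import json
import re

def parse_verdict(reply: str) -> dict:
    """Extract the JSON object from a model reply, tolerating an
    optional ```json ... ``` fence around it."""
    match = re.search(r"\{.*\}", reply, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in reply")
    return json.loads(match.group(0))

# Example reply with assumed field names (not the documented schema).
reply = (
    '```json\n'
    '{"rating": "Unsafe", "category": "O3: Violence", '
    '"rationale": "Depicts graphic harm."}\n'
    '```'
)
verdict = parse_verdict(reply)
print(verdict["rating"])  # Unsafe
```

Stripping the fence before `json.loads` keeps the parser robust to either raw or fenced replies.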

Use Cases

Content Moderation
Social Media Content Moderation
Automatically identifies and flags image content on social media that violates platform safety policies.
Academic Research
Visual Content Safety Research
Used for studying safety classification methods for image content.
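In a moderation pipeline, the model's parsed verdict would drive a flag-or-allow decision. A hypothetical sketch, assuming an "Unsafe" rating means the image should be flagged; the field name and rating values are assumptions for illustration:

```python
# Hypothetical moderation step on top of a parsed safety verdict.
# Field names and rating values are assumed, not the official schema.

def should_flag(verdict: dict) -> bool:
    """Flag any image the safety model rates as unsafe."""
    return verdict.get("rating", "").lower() == "unsafe"

print(should_flag({"rating": "Unsafe", "category": "O3: Violence"}))  # True
print(should_flag({"rating": "Safe"}))  # False
```

A real pipeline would also route the `category` and `rationale` fields to human reviewers rather than acting on the boolean alone.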