
LlavaGuard 7B

Developed by AIML-TUDA
LlavaGuard is a framework, built on vision-language models, for dataset safety evaluation and safeguarding, primarily used for content safety assessment.
Downloads: 64
Release Time: 6/1/2024

Model Overview

LlavaGuard is a safety evaluation framework based on visual-language models, designed to assess user-provided content for compliance with predefined safety policy categories.

Model Features

Multi-category Safety Evaluation
Supports evaluation across multiple safety policy categories, such as hate speech, violence, and other inappropriate content.
JSON Format Output
Evaluation results are returned as structured JSON, making them easy to integrate and process programmatically (see the sketch after this list).
Academic Research Oriented
Primarily intended for researchers and academic use.
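To illustrate the JSON output feature, here is a minimal sketch of what a structured assessment might look like. The field names and values are illustrative assumptions, not the model's confirmed schema:

```python
# Hypothetical shape of a LlavaGuard assessment (field names are illustrative):
example_assessment = {
    "rating": "Unsafe",                             # overall safety verdict
    "category": "O2: Violence, Harm, or Cruelty",   # matched policy category
    "rationale": "The image depicts ...",           # model-generated explanation
}
```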

Model Capabilities

Image Content Safety Evaluation
Text Content Safety Evaluation
Multimodal Content Analysis
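The capabilities above can be exercised through a standard vision-language inference loop. Below is a minimal sketch assuming the checkpoint is published on the Hugging Face Hub in a LLaVA-compatible format; the repo id, prompt format, and use of the transformers loading path are assumptions, and the released weights may instead require the original LLaVA codebase:

```python
# Minimal inference sketch (repo id and prompt are illustrative assumptions).
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "AIML-TUDA/LlavaGuard-7B"  # assumed repo id
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

image = Image.open("example.jpg")
# The real safety-policy prompt is much longer; a placeholder is used here.
prompt = "USER: <image>\nAssess the image against the safety policy. ASSISTANT:"

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=False)
response = processor.decode(output_ids[0], skip_special_tokens=True)
print(response)  # expected to contain a JSON assessment as sketched above
```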

Use Cases

Content Moderation
Social Media Content Moderation
Automatically detects policy-violating content, such as hate speech and violence, on social media platforms (see the sketch after this list).
Academic Research
Safety Policy Research
Used for studying the effectiveness and applicability of different safety policies.
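For the content moderation use case, a pipeline could consume the model's JSON assessment and flag unsafe items. The sketch below is hypothetical: the `run_llavaguard` helper, the field names, and the surrounding loop are assumptions used only to illustrate the integration:

```python
# Sketch of a moderation filter over hypothetical LlavaGuard JSON output.
import json

def is_compliant(raw_response: str) -> bool:
    """Parse the model's JSON assessment and report whether content is rated safe."""
    assessment = json.loads(raw_response)
    return assessment.get("rating", "").lower() == "safe"

# Example usage (run_llavaguard and flag_for_review are hypothetical helpers):
# for post in incoming_posts:
#     if not is_compliant(run_llavaguard(post.image, policy_prompt)):
#         flag_for_review(post)
```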