LlavaGuard-v1.2-0.5B-OV

Developed by AIML-TUDA
LlavaGuard-v1.2-0.5B-OV is an image-text model for content safety assessment, designed for researchers.
Downloads: 1,945
Release date: 11/22/2024

Model Overview

This model evaluates whether user-provided image and text content complies with a safety policy, and supports detection across multiple safety categories, such as hate speech, violent content, and sexual content.
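Guard models of this kind typically return a structured, JSON-style assessment that downstream code turns into a moderation decision. A minimal sketch of that post-processing step, assuming hypothetical field names ("rating", "category", "rationale") rather than the model's documented schema:

```python
import json

# Hypothetical example of the JSON-style assessment such a guard model
# might return; the field names here are illustrative assumptions, not
# the documented LlavaGuard output schema.
raw_response = json.dumps({
    "rating": "Unsafe",
    "category": "O3: Violence",
    "rationale": "The image depicts graphic violence.",
})

def moderate(response_text: str) -> dict:
    """Turn a raw model response into an allow/block moderation decision."""
    assessment = json.loads(response_text)
    return {
        "allowed": assessment.get("rating") == "Safe",
        "category": assessment.get("category"),
        "reason": assessment.get("rationale"),
    }

decision = moderate(raw_response)
print(decision["allowed"], decision["category"])
```

In a real pipeline the decision dict would feed a review queue or an automatic takedown action, keyed on the violated category.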

Model Features

Efficient inference
The smallest LlavaGuard variant, it offers faster inference while maintaining strong performance.
Large context window
Built on the llava-onevision-qwen2-0.5b-ov base model, it supports a context window of 32K tokens.
Multi-category safety assessment
Detects multiple safety categories, including hate speech, violent content, and sexual content.
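Multi-category assessment is typically driven by a policy prompt that enumerates the categories the model should check. A minimal sketch of assembling such a prompt, using a hypothetical category taxonomy (the real LlavaGuard policy text defines its own categories and wording):

```python
# Hypothetical category taxonomy for illustration only; the real
# LlavaGuard safety policy defines its own categories and phrasing.
CATEGORIES = [
    "O1: Hate Speech",
    "O2: Sexual Content",
    "O3: Violence",
]

def build_policy_prompt(categories: list[str]) -> str:
    """Assemble a safety-policy prompt listing each category to assess."""
    lines = ["Assess the image and text against the following safety categories:"]
    lines += [f"- {c}" for c in categories]
    lines.append("Respond with a rating, the violated category, and a rationale.")
    return "\n".join(lines)

prompt = build_policy_prompt(CATEGORIES)
print(prompt)
```

Keeping the taxonomy in data rather than hard-coded text makes it easy to add or drop categories per deployment.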

Model Capabilities

Image-text safety assessment
Multi-category content detection
Efficient inference

Use Cases

Content safety
Social media content review
Automatically detects inappropriate content on social media, such as hate speech and violent content.
Provides a safety rating and violation categories to help quickly identify and handle violating content.
Educational content review
Evaluates whether educational content complies with safety policies to ensure it is suitable for students.
Provides a safety rating and rationale to help educational institutions screen suitable content.