
ShieldGemma 2 4B IT

Developed by Google
ShieldGemma 2 is a model trained on Gemma 3's 4-billion-parameter IT checkpoint for image safety classification across key harm categories: it takes images as input and outputs safety labels according to policy.
Downloads: 1,987
Release Date: 3/4/2025

Model Overview

ShieldGemma 2 is a vision-language model for image content moderation, capable of identifying and classifying harmful content in images, including sexually explicit content, dangerous content, and violent/bloody content.

Model Features

Multi-Category Safety Review
Identifies sexually explicit content, dangerous content, and violent/bloody content, covering multiple categories of harmful imagery.
High Performance
Outperforms comparable models on internal benchmarks, with high precision and recall.
Easy Integration
Provides a simple API and code examples so developers can quickly integrate it into a range of applications; a usage sketch follows below.
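As an illustration of that integration path, the sketch below loads the model with Hugging Face Transformers and classifies a single local image. It assumes a Transformers release that includes ShieldGemma 2 support; the class name ShieldGemma2ForImageClassification, the output field probabilities, and the checkpoint id google/shieldgemma-2-4b-it are taken as assumptions from that integration and may differ in the version you have installed.

```python
# Minimal sketch: classify one image with ShieldGemma 2 via Hugging Face Transformers.
# Assumes a Transformers version with ShieldGemma 2 support; the class and output
# field names used here are assumptions and may differ in your installed version.
import torch
from PIL import Image
from transformers import AutoProcessor, ShieldGemma2ForImageClassification

model_id = "google/shieldgemma-2-4b-it"
processor = AutoProcessor.from_pretrained(model_id)
model = ShieldGemma2ForImageClassification.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
).eval()

image = Image.open("example.jpg")  # any local image to check

inputs = processor(images=[image], return_tensors="pt").to(model.device)

with torch.no_grad():
    output = model(**inputs)

# The output is expected to carry per-policy scores (sexually explicit, dangerous,
# violent/bloody content); inspect the returned object if the field name differs.
print(output.probabilities)
```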

Model Capabilities

Image Safety Classification
Harmful Content Detection
Multi-Category Review

Use Cases

Content Moderation
Social Media Content Filtering
Used to automatically detect and filter harmful image content on social media platforms.
Improves content moderation efficiency and reduces manual review workload.
Image Generation System Output Filtering
Used to filter harmful images produced by generative AI systems, ensuring output safety.
Improves the safety of generated content and helps meet platform policy requirements; a filtering sketch follows below.
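One way such a filter could be wired into a generation pipeline is sketched below: the classifier's per-category scores are compared against a threshold, and the image is blocked if any category exceeds it. The category names, score format, and threshold value are illustrative assumptions rather than part of the model's documented output.

```python
# Hypothetical post-generation filter: block an image if any harm category's score
# exceeds a threshold. Category names and the 0.5 threshold are illustrative choices.
from typing import Dict

HARM_CATEGORIES = ("sexually_explicit", "dangerous_content", "violence_gore")

def should_block(scores: Dict[str, float], threshold: float = 0.5) -> bool:
    """Return True if any monitored category's probability exceeds the threshold."""
    return any(scores.get(cat, 0.0) > threshold for cat in HARM_CATEGORIES)

# Example usage with made-up scores, as they might come from the classifier above:
example_scores = {"sexually_explicit": 0.02, "dangerous_content": 0.01, "violence_gore": 0.87}
if should_block(example_scores):
    print("Image rejected by safety filter")  # drop or regenerate the image
else:
    print("Image passed the safety filter")
```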