
ControlNet MediaPipe Face

Developed by stablediffusionapi
A ControlNet model trained on the LAION-Face dataset for generating images with precise facial expression control.
Downloads: 15
Release Time: 6/20/2023

Model Overview

This model uses the MediaPipe face detector to generate facial keypoint annotations, which are then used to train a ControlNet that precisely controls facial expressions and gaze direction in generated images. It supports multi-face scenes and can be applied to portrait editing, advertising design, and other fields.
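To make the conditioning step concrete, here is a minimal sketch of the coordinate conversion involved: MediaPipe reports face landmarks as normalized (x, y) values in [0, 1], which must be scaled to pixel positions before they can be drawn onto the conditioning image. The landmark values and feature grouping below are illustrative assumptions, not the model's actual preprocessing code.

```python
# Illustrative sketch: convert MediaPipe-style normalized landmarks
# (x, y in [0, 1]) into pixel coordinates for a ControlNet conditioning
# image. The landmark values below are made-up examples.

def to_pixel_coords(landmarks, width, height):
    """Scale normalized (x, y) pairs to integer pixel positions."""
    return [(round(x * (width - 1)), round(y * (height - 1)))
            for x, y in landmarks]

# Hypothetical landmarks for one detected face, grouped by feature
# (pupil keypoints are what enable gaze-direction control).
face = {
    "left_pupil":  [(0.35, 0.40)],
    "right_pupil": [(0.65, 0.40)],
    "mouth":       [(0.40, 0.70), (0.50, 0.72), (0.60, 0.70)],
}

# Scale every feature group onto a 512x512 conditioning canvas.
pixels = {name: to_pixel_coords(pts, 512, 512)
          for name, pts in face.items()}
print(pixels["left_pupil"])
```

With multiple detected faces, the same scaling is simply repeated per face, which is what allows each face to carry its own independent expression annotation.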

Model Features

Precise facial control
Controls gaze direction through pupil keypoints and supports adjustments for eyebrows, eyes, mouth, and other facial features
Multi-face support
Can process multiple faces in an image simultaneously while maintaining independent expression control for each
Dual-version compatibility
Supports both Stable Diffusion 2.1-base and Stable Diffusion 1.5

Model Capabilities

Facial expression control
Gaze direction adjustment
Multi-face image generation
Image-to-image transformation

Use Cases

Advertising design
Customized ad character expressions
Generates character images with specific expressions to match advertising needs
Sample outputs demonstrate ad images with varied expressions such as happiness and surprise
Portrait editing
Gaze direction correction
Adjusts the gaze direction of subjects in photos
The model can precisely control pupil positions to alter gaze direction