in2IN
in2IN is a novel diffusion model that generates high-quality human interaction motions by combining holistic interaction descriptions with per-individual motion descriptions.
Downloads: 47
Release Time: 6/13/2024
Model Overview
This model specializes in generating human interaction motions by integrating both overall interaction descriptions and individual motion details, addressing the limited diversity of individual dynamics in existing methods.
Model Features
Enhanced Individual Motion Descriptions
Conditions not only on the overall interaction description but also on an individual motion description for each participant, improving motion diversity.
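A minimal PyTorch sketch of how a denoiser could fuse an interaction-level text embedding with per-person text embeddings, in the spirit of the description above. All module names, dimensions, and the fusion strategy here are illustrative assumptions, not the repository's actual API.

```python
import torch
import torch.nn as nn

class ConditionedDenoiser(nn.Module):
    """Hypothetical denoiser conditioned on interaction + individual prompts."""
    def __init__(self, motion_dim=262, text_dim=512, hidden=512):
        super().__init__()
        self.motion_proj = nn.Linear(motion_dim, hidden)
        # Interaction embedding plus one embedding per participant.
        self.cond_proj = nn.Linear(3 * text_dim, hidden)
        self.backbone = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=hidden, nhead=8, batch_first=True),
            num_layers=4,
        )
        self.out = nn.Linear(hidden, motion_dim)

    def forward(self, noisy_motion, t_emb, interaction_emb, person_a_emb, person_b_emb):
        # Fuse the holistic interaction description with each person's description.
        cond = torch.cat([interaction_emb, person_a_emb, person_b_emb], dim=-1)
        tokens = (
            self.motion_proj(noisy_motion)          # (B, T, hidden)
            + self.cond_proj(cond).unsqueeze(1)     # broadcast condition over time
            + t_emb.unsqueeze(1)                    # diffusion timestep embedding
        )
        return self.out(self.backbone(tokens))
```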
DualMDM Technology
Combines in2IN-generated motions with single-agent motion priors to enhance individual diversity while maintaining interpersonal coordination.
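A sketch of the DualMDM idea as described above: at each denoising step, the interaction model's prediction is blended with predictions from a single-person motion prior for each individual. The blending weight `w`, the per-person channel split, and all function names are assumptions for illustration, not the official implementation.

```python
import torch

def dual_denoise_step(x_t, t, interaction_model, individual_prior, cond, w=0.5):
    """Blend interaction-aware and individual-prior predictions for one step."""
    # Prediction conditioned on the full interaction (both persons jointly).
    eps_inter = interaction_model(
        x_t, t, cond["interaction"], cond["person_a"], cond["person_b"]
    )
    # Per-person predictions from a single-agent prior, run on each person's channels.
    half = x_t.shape[-1] // 2
    eps_a = individual_prior(x_t[..., :half], t, cond["person_a"])
    eps_b = individual_prior(x_t[..., half:], t, cond["person_b"])
    eps_single = torch.cat([eps_a, eps_b], dim=-1)
    # Interpolate: w -> 1 favors interpersonal coordination, w -> 0 favors individual diversity.
    return w * eps_inter + (1.0 - w) * eps_single
```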
Dataset Expansion
Uses large language models to extend the InterHuman dataset with individual motion descriptions.
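A sketch of how an interaction-level caption could be expanded into per-person descriptions with a large language model, in the spirit of the dataset extension above. The prompt wording and the `generate` callable (any text-in/text-out LLM interface) are assumptions for illustration.

```python
def expand_interaction_caption(interaction_caption: str, generate) -> dict:
    """Ask an LLM to split an interaction caption into per-person descriptions."""
    prompt = (
        "The following sentence describes an interaction between two people:\n"
        f"\"{interaction_caption}\"\n"
        "Write one sentence describing only what Person 1 does, and one sentence "
        "describing only what Person 2 does. Answer as:\n"
        "Person 1: ...\nPerson 2: ..."
    )
    reply = generate(prompt)
    lines = [l.strip() for l in reply.splitlines() if l.strip()]
    person_1 = next(
        (l.split(":", 1)[1].strip() for l in lines
         if l.lower().startswith("person 1") and ":" in l), "")
    person_2 = next(
        (l.split(":", 1)[1].strip() for l in lines
         if l.lower().startswith("person 2") and ":" in l), "")
    return {"interaction": interaction_caption, "person_1": person_1, "person_2": person_2}
```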
Model Capabilities
Text-to-3D Motion Generation
Multi-agent Interaction Synthesis
Individual Motion Control
Use Cases
Virtual Reality & Gaming
Game Character Animation
Generates natural interaction motions between NPCs in games.
Enhances realism and diversity in game scenarios.
Robotics Interaction
Human-Robot Interaction Training
Generates diverse human interaction data for robot training.
Improves robots' understanding of and response to human motions.