Hunyuan3D 2.0 is a 3D generation system with a two-stage pipeline of shape generation and texture synthesis, capable of producing high-quality 3D models from image or text inputs.
Model Features
- Two-stage generation process: first generates a base mesh, then synthesizes texture maps for that mesh, effectively decoupling the difficulties of shape and texture generation.
- High-resolution textures: generates high-resolution, vivid texture maps that enhance the visual quality of 3D assets.
- Multimodal input support: accepts both images and text as input conditions, flexibly adapting to different creative needs.
- Professional production platform: provides the Hunyuan3D-Studio platform to streamline the re-creation of 3D assets, supporting mesh editing and animation.

Model Capabilities
- Image-to-3D generation
- Text-to-3D generation
- High-resolution texture synthesis
- 3D mesh editing
- 3D asset animation

Use Cases
- Game development (rapid game asset generation): quickly generate 3D game characters or scenes from concept art, significantly shortening the 3D asset creation cycle.
- Film production (pre-visualization): quickly generate 3D preview models for film scenes, accelerating pre-production workflows.
- Virtual reality (VR content creation): rapidly create 3D environments for VR experiences, lowering the barrier to VR content production.
Hunyuan3D-2
Living out everyone's imagination on creating and manipulating 3D assets.
You can follow the steps below to use Hunyuan3D 2.0 via code or the Gradio App.
Installation
Please install PyTorch via the official site. Then install the other requirements via:
```bash
pip install -r requirements.txt

# for texture
cd hy3dgen/texgen/custom_rasterizer
python3 setup.py install
cd ../../..
cd hy3dgen/texgen/differentiable_renderer
bash compile_mesh_painter.sh  # or: python3 setup.py install (on Windows)
```
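Once everything is installed, a quick import check from the repository root confirms the package and the two pipelines are reachable (a minimal sanity check; the model weights themselves are only downloaded on the first `from_pretrained` call):

```python
# sanity check from the repository root: both pipelines should import without errors
from hy3dgen.shapegen import Hunyuan3DDiTFlowMatchingPipeline
from hy3dgen.texgen import Hunyuan3DPaintPipeline
print('hy3dgen imports OK')
```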
Usage Examples
Basic Usage
We designed a diffusers-like API for our shape generation model, Hunyuan3D-DiT, and our texture synthesis model, Hunyuan3D-Paint.
You can access Hunyuan3D-DiT via:
```python
from hy3dgen.shapegen import Hunyuan3DDiTFlowMatchingPipeline

pipeline = Hunyuan3DDiTFlowMatchingPipeline.from_pretrained('tencent/Hunyuan3D-2')
mesh = pipeline(image='assets/demo.png')[0]
```
The output mesh is a trimesh object, which you can save to a glb/obj (or other format) file.
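For example, trimesh infers the output format from the file extension, so exporting is a one-liner (the path below is only illustrative):

```python
# write the generated mesh to disk; the format is inferred from the extension
mesh.export('demo.glb')  # .obj, .ply, .stl, ... also work
```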
For Hunyuan3D-Paint, do the following:
```python
from hy3dgen.shapegen import Hunyuan3DDiTFlowMatchingPipeline
from hy3dgen.texgen import Hunyuan3DPaintPipeline

# let's generate a mesh first
pipeline = Hunyuan3DDiTFlowMatchingPipeline.from_pretrained('tencent/Hunyuan3D-2')
mesh = pipeline(image='assets/demo.png')[0]

# then synthesize a texture for it
pipeline = Hunyuan3DPaintPipeline.from_pretrained('tencent/Hunyuan3D-2')
mesh = pipeline(mesh, image='assets/demo.png')
```
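Assuming the textured result is again a trimesh object (as in the shape stage), it can be exported the same way; glb is convenient because it keeps geometry and texture in a single file:

```python
# export the textured mesh; glb bundles geometry and texture into one file (assumes a trimesh result)
mesh.export('demo_textured.glb')
```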
Advanced Usage
Please visit minimal_demo.py for more advanced usage, such as text-to-3D and texture generation for a handcrafted mesh.
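As a rough sketch of the text-to-3D path, the snippet below first turns the prompt into an image and then reuses the image-to-3D pipeline. The HunyuanDiTPipeline helper and the checkpoint name are assumptions on our part; check minimal_demo.py for the exact module and arguments.

```python
# hypothetical text-to-3D sketch: text -> image -> shape (helper names assumed, see minimal_demo.py)
from hy3dgen.text2image import HunyuanDiTPipeline            # assumed helper module
from hy3dgen.shapegen import Hunyuan3DDiTFlowMatchingPipeline

t2i = HunyuanDiTPipeline('Tencent-Hunyuan/HunyuanDiT-v1.1-Diffusers-Distilled')  # assumed checkpoint
image = t2i('a cute rabbit astronaut')                        # text -> condition image

pipeline = Hunyuan3DDiTFlowMatchingPipeline.from_pretrained('tencent/Hunyuan3D-2')
mesh = pipeline(image=image)[0]                               # image -> base mesh
```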
Gradio App
You can also host a Gradio App on your own computer via:
```bash
pip3 install gradio==3.39.0
python3 gradio_app.py
```
If you don't want to host it yourself, don't forget to visit Hunyuan3D for quick use.
Features
We present Hunyuan3D 2.0, an advanced large-scale 3D synthesis system for generating high-resolution textured 3D assets. This system includes two foundation components: a large-scale shape generation model - Hunyuan3D-DiT, and a large-scale texture synthesis model - Hunyuan3D-Paint. The shape generative model, built on a scalable flow-based diffusion transformer, aims to create geometry that properly aligns with a given condition image, laying a solid foundation for downstream applications. The texture synthesis model, benefiting from strong geometric and diffusion priors, produces high-resolution and vibrant texture maps for either generated or hand-crafted meshes. Furthermore, we build Hunyuan3D-Studio - a versatile, user-friendly production platform that simplifies the re-creation process of 3D assets. It allows both professional and amateur users to manipulate or even animate their meshes efficiently.
Documentation
Hunyuan3D 2.0
Architecture
Hunyuan3D 2.0 features a two-stage generation pipeline, starting with the creation of a bare mesh, followed by the synthesis of a texture map for that mesh. This strategy is effective for decoupling the difficulties of shape and texture generation and also provides flexibility for texturing either generated or handcrafted meshes.
Performance
We evaluated Hunyuan3D 2.0 against other open-source as well as closed-source 3D generation methods. The numerical results indicate that Hunyuan3D 2.0 surpasses all baselines both in the quality of the generated textured 3D assets and in how well they follow the given condition.
Jan 21, 2025: Released Hunyuan3D 2.0. Please give it a try!
Open-Source Plan
[x] Inference Code
[x] Model Checkpoints
[x] Technical Report
[ ] ComfyUI
[ ] TensorRT Version
Technical Details
The shape generative model, Hunyuan3D-DiT, is built on a scalable flow-based diffusion transformer, aiming to create geometry that properly aligns with a given condition image. The texture synthesis model, Hunyuan3D-Paint, benefits from strong geometric and diffusion priors to produce high-resolution and vibrant texture maps for meshes.
BibTeX
If you find this repository helpful, please cite our report:
```bibtex
@misc{hunyuan3d22025tencent,
    title={Hunyuan3D 2.0: Scaling Diffusion Models for High Resolution Textured 3D Assets Generation},
    author={Tencent Hunyuan3D Team},
    year={2025},
    eprint={2501.12202},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}

@misc{yang2024tencent,
    title={Tencent Hunyuan3D-1.0: A Unified Framework for Text-to-3D and Image-to-3D Generation},
    author={Tencent Hunyuan3D Team},
    year={2024},
    eprint={2411.02293},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
```
Community Resources
Thanks to the contributions of community members, here are some great extensions of Hunyuan3D 2.0: