MeshAnything: Artist-Created Mesh Generation with Autoregressive Transformers
MeshAnything is a project focused on mesh generation using autoregressive transformers. It provides various methods for mesh and point cloud inference, enabling users to generate meshes from different types of inputs.
Quick Start
Installation
Our environment has been tested on Ubuntu 22 with CUDA 11.8 on A100, A800, and A6000 GPUs.
```bash
git clone https://github.com/buaacyw/MeshAnything.git && cd MeshAnything
conda create -n MeshAnything python==3.10.13
conda activate MeshAnything
pip install torch==2.1.1 torchvision==0.16.1 torchaudio==2.1.1 --index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt
pip install flash-attn --no-build-isolation
```
Usage
Local Gradio Demo 
```bash
python app.py
```
Mesh Command-Line Inference
```bash
# Inference on a folder of meshes
python main.py --input_dir examples --out_dir mesh_output --input_type mesh
# Inference on a single mesh file
python main.py --input_path examples/wand.ply --out_dir mesh_output --input_type mesh
# Folder inference with the --mc option enabled
python main.py --input_dir examples --out_dir mesh_output --input_type mesh --mc
```
Point Cloud Command-Line Inference
```bash
# Inference on a folder of point clouds
python main.py --input_dir pc_examples --out_dir pc_output --input_type pc_normal
# Inference on a single point cloud file
python main.py --input_path pc_examples/mouse.npy --out_dir pc_output --input_type pc_normal
```
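The expected layout of a `pc_normal` `.npy` file is not documented here. As an assumption, the sketch below writes an `(N, 6)` array of XYZ coordinates concatenated with unit normals, a common convention for point-cloud-with-normal files; the filename is hypothetical and would be passed via `--input_path`:

```python
import numpy as np

# Assumed layout: N points, each row = [x, y, z, nx, ny, nz]
rng = np.random.default_rng(0)
points = rng.uniform(-0.5, 0.5, size=(4096, 3))            # positions inside a unit box
normals = rng.normal(size=(4096, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)  # normalize to unit length
pc = np.concatenate([points, normals], axis=1).astype(np.float32)

np.save("my_shape.npy", pc)  # hypothetical filename; pass via --input_path
```

If this layout does not match what `main.py` expects, inspect the bundled `pc_examples/mouse.npy` file to confirm the actual shape and ordering.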
Features
- Mesh Generation: Generates artist-created meshes from 3D shapes using autoregressive transformers.
- Multiple Input Types: Supports both mesh and point cloud inputs.
- Command Line Inference: Provides command-line interfaces for easy inference.
Installation
See the Quick Start section above for installation steps.
Usage Examples
Basic Usage
See the Quick Start section above for basic usage examples.
Documentation
Important Notes
- Generating a mesh takes about 7 GB of GPU memory and 30 seconds on an A6000.
- The input mesh is normalized to a unit bounding box. For best results, the up vector of the input mesh should be +Y.
- Due to limited computational resources, MeshAnything is trained on meshes with fewer than 800 faces and cannot generate meshes with more than 800 faces. The shape of the input mesh should be sharp enough; otherwise, it is difficult to represent with only 800 faces. For this reason, feed-forward image-to-3D methods often produce poor results due to insufficient shape quality. We suggest using results from 3D reconstruction, scanning, or SDS-based methods (such as DreamCraft3D) as input to MeshAnything.
- Please refer to https://huggingface.co/spaces/Yiwen-ntu/MeshAnything/tree/main/examples for more examples.
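The unit-bounding-box normalization and the 800-face budget described above can be sketched as follows. This is an illustrative sketch, not the project's actual preprocessing code, and the function names are made up for this example:

```python
import numpy as np

MAX_FACES = 800  # MeshAnything's training-time face budget

def normalize_to_unit_bbox(vertices: np.ndarray) -> np.ndarray:
    """Center the mesh and scale it so its bounding box fits in a unit cube."""
    vmin, vmax = vertices.min(axis=0), vertices.max(axis=0)
    center = (vmin + vmax) / 2.0
    scale = (vmax - vmin).max()
    return (vertices - center) / scale

def within_face_budget(faces: np.ndarray) -> bool:
    """True if the mesh fits within the 800-face limit."""
    return len(faces) <= MAX_FACES

# Example: a single triangle placed far from the origin
verts = np.array([[10.0, 10.0, 10.0], [12.0, 10.0, 10.0], [10.0, 11.0, 10.0]])
faces = np.array([[0, 1, 2]])
norm = normalize_to_unit_bbox(verts)
# After normalization, all coordinates lie within [-0.5, 0.5]
```

Checking the face count of a candidate input this way before running inference can save a wasted generation attempt.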
TODO
The repo is still under construction; thanks for your patience.
- [ ] Release of training code.
- [ ] Release of larger model.
Acknowledgement
Our code is based on these wonderful repos:
BibTeX
```bibtex
@misc{chen2024meshanything,
      title={MeshAnything: Artist-Created Mesh Generation with Autoregressive Transformers},
      author={Yiwen Chen and Tong He and Di Huang and Weicai Ye and Sijin Chen and Jiaxiang Tang and Xin Chen and Zhongang Cai and Lei Yang and Gang Yu and Guosheng Lin and Chi Zhang},
      year={2024},
      eprint={2406.10163},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
License
No license information is provided in this document; check the upstream repository for licensing terms before use.