+“Living out everyone’s imagination on creating and manipulating 3D assets.”
+
+
+## 🔥 News
+
+- Jan 21, 2025: 💬 Release [Hunyuan3D 2.0](https://huggingface.co/spaces/tencent/Hunyuan3D-2). Please give it a try!
+
+## **Abstract**
+
+We present Hunyuan3D 2.0, an advanced large-scale 3D synthesis system for generating high-resolution textured 3D assets.
+This system includes two foundation components: a large-scale shape generation model - Hunyuan3D-DiT, and a large-scale
+texture synthesis model - Hunyuan3D-Paint.
+The shape generative model, built on a scalable flow-based diffusion transformer, aims to create geometry that properly
+aligns with a given condition image, laying a solid foundation for downstream applications.
+The texture synthesis model, benefiting from strong geometric and diffusion priors, produces high-resolution and vibrant
+texture maps for either generated or hand-crafted meshes.
+Furthermore, we build Hunyuan3D-Studio - a versatile, user-friendly production platform that simplifies the re-creation
+process of 3D assets. It allows both professional and amateur users to manipulate or even animate their meshes
+efficiently.
+We systematically evaluate our models, showing that Hunyuan3D 2.0 outperforms previous state-of-the-art models,
+including both open-source and closed-source models, in geometry details, condition alignment, and texture quality.
+
+
+
+
+
+
+
+## ☯️ **Hunyuan3D 2.0**
+
+### Architecture
+
+Hunyuan3D 2.0 features a two-stage generation pipeline, starting with the creation of a bare mesh, followed by the
+synthesis of a texture map for that mesh. This strategy is effective for decoupling the difficulties of shape and
+texture generation and also provides flexibility for texturing either generated or handcrafted meshes.
+
+
+
+
+
+### Performance
+
+We have evaluated Hunyuan3D 2.0 against other open-source as well as closed-source 3D generation methods.
+The numerical results indicate that Hunyuan3D 2.0 surpasses all baselines in the quality of generated textured 3D assets
+and in its condition-following ability.
+
+| Model                     | CMMD (⬇)  | FID_CLIP (⬇) | FID (⬇)     | CLIP-score (⬆) |
+|---------------------------|-----------|--------------|-------------|----------------|
+| Top Open-source Model 1   | 3.591     | 54.639       | 289.287     | 0.787          |
+| Top Closed-source Model 1 | 3.600     | 55.866       | 305.922     | 0.779          |
+| Top Closed-source Model 2 | 3.368     | 49.744       | 294.628     | 0.806          |
+| Top Closed-source Model 3 | 3.218     | 51.574       | 295.691     | 0.799          |
+| Hunyuan3D 2.0             | **3.193** | **49.165**   | **282.429** | **0.809**      |
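+
+As a rough, hedged illustration of the kind of metric in the table (not the official evaluation protocol), a CLIP-based similarity between a condition image and a rendered view of the generated asset could be computed along these lines; the checkpoint name and file paths are placeholders:
+
+```python
+# Hypothetical sketch of an image-to-image CLIP similarity; not the official evaluation code.
+import torch
+from PIL import Image
+from transformers import CLIPModel, CLIPProcessor
+
+model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
+processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
+
+condition = Image.open("assets/demo.png")   # condition image
+rendered = Image.open("rendered_view.png")  # placeholder: a render of the generated asset
+
+inputs = processor(images=[condition, rendered], return_tensors="pt")
+with torch.no_grad():
+    feats = model.get_image_features(**inputs)
+feats = feats / feats.norm(dim=-1, keepdim=True)  # L2-normalize for cosine similarity
+print(f"CLIP similarity: {(feats[0] @ feats[1]).item():.3f}")
+```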
+
+Generation results of Hunyuan3D 2.0:
+
+
+
+
+
+### Pretrained Models
+
+| Model | Date | Huggingface |
+|----------------------|------------|--------------------------------------------------------|
+| Hunyuan3D-DiT-v2-0 | 2025-01-21 | [Download](https://huggingface.co/tencent/Hunyuan3D-2) |
+| Hunyuan3D-Paint-v2-0 | 2025-01-21 | [Download](https://huggingface.co/tencent/Hunyuan3D-2) |
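+
+If you want to pre-fetch the checkpoints instead of letting `from_pretrained` download them on first use, a minimal sketch with `huggingface_hub` (assuming the default cache layout) is:
+
+```python
+# Optional: pre-download the Hunyuan3D-2 checkpoints into the local Hugging Face cache
+from huggingface_hub import snapshot_download
+
+snapshot_download(repo_id='tencent/Hunyuan3D-2')
+```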
+
+## 🤗 Get Started with Hunyuan3D 2.0
+
+You can follow the steps below to use Hunyuan3D 2.0 via code or the Gradio App.
+
+### Install Requirements
+
+Please install PyTorch from the [official](https://pytorch.org/) site. Then install the remaining requirements via:
+
+```bash
+pip install -r requirements.txt
+# for texture
+cd hy3dgen/texgen/custom_rasterizer
+python3 setup.py install
+cd ../differentiable_renderer
+bash compile_mesh_painter.sh
+cd ../../..
+```
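+
+After installation, an optional sanity check that the core dependencies are importable and a GPU is visible (a minimal sketch; `trimesh` is assumed to come in via requirements.txt):
+
+```python
+# Optional post-install sanity check
+import torch
+import trimesh  # meshes returned by the pipelines are trimesh objects
+
+print(torch.__version__, torch.cuda.is_available())
+```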
+
+### API Usage
+
+We provide a diffusers-like API for our shape generation model, Hunyuan3D-DiT, and our texture synthesis model,
+Hunyuan3D-Paint.
+
+You can access **Hunyuan3D-DiT** via:
+
+```python
+from hy3dgen.shapegen import Hunyuan3DDiTFlowMatchingPipeline
+
+# load the flow-matching shape generation pipeline from the Hugging Face checkpoint
+pipeline = Hunyuan3DDiTFlowMatchingPipeline.from_pretrained('tencent/Hunyuan3D-2')
+# generate an untextured mesh conditioned on a single image
+mesh = pipeline(image='assets/demo.png')[0]
+```
+
+The output mesh is a [trimesh object](https://trimesh.org/trimesh.html), which you can save as a GLB/OBJ (or other
+format) file.
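+
+For example, a minimal export using trimesh's standard API (file names are illustrative):
+
+```python
+# Save the untextured mesh in common interchange formats
+mesh.export('demo_white_mesh.glb')  # binary glTF
+mesh.export('demo_white_mesh.obj')  # Wavefront OBJ
+```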
+
+For **Hunyuan3D-Paint**, do the following:
+
+```python
+from hy3dgen.texgen import Hunyuan3DPaintPipeline
+from hy3dgen.shapegen import Hunyuan3DDiTFlowMatchingPipeline
+
+# let's generate an untextured mesh first
+pipeline = Hunyuan3DDiTFlowMatchingPipeline.from_pretrained('tencent/Hunyuan3D-2')
+mesh = pipeline(image='assets/demo.png')[0]
+
+# then synthesize a texture map for it, conditioned on the same image
+pipeline = Hunyuan3DPaintPipeline.from_pretrained('tencent/Hunyuan3D-2')
+mesh = pipeline(mesh, image='assets/demo.png')
+```
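+
+The textured result is again a trimesh object, so it can be saved the same way, e.g. `mesh.export('demo_textured_mesh.glb')` (illustrative file name); the bundled Gradio app additionally passes `include_normals=True` when exporting textured meshes.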
+
+Please see [minimal_demo.py](minimal_demo.py) for more advanced usage, such as **text to 3D** and **texture generation
+for a handcrafted mesh**.
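+
+As a hedged sketch of the handcrafted-mesh case (consult [minimal_demo.py](minimal_demo.py) for the actual entry points), the Paint pipeline can be fed a mesh loaded with trimesh; the file name below is illustrative:
+
+```python
+# Hypothetical sketch: texture a handcrafted mesh with Hunyuan3D-Paint
+import trimesh
+
+from hy3dgen.texgen import Hunyuan3DPaintPipeline
+
+mesh = trimesh.load('my_handcrafted_mesh.obj', force='mesh')  # merge scene into a single mesh
+pipeline = Hunyuan3DPaintPipeline.from_pretrained('tencent/Hunyuan3D-2')
+textured_mesh = pipeline(mesh, image='assets/demo.png')
+```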
+
+### Gradio App
+
+You can also host a [Gradio](https://www.gradio.app/) App on your own computer via:
+
+```bash
+pip3 install gradio==3.39.0
+python3 gradio_app.py
+```
+
+If you don't want to host it yourself, don't forget to visit [Hunyuan3D](https://3d.hunyuan.tencent.com) for a quick try.
+
+## 📑 Open-Source Plan
+
+- [x] Inference Code
+- [x] Model Checkpoints
+- [ ] ComfyUI
+- [ ] TensorRT Version
+
+## 🔗 BibTeX
+
+If you found this repository helpful, please cite our report:
+
+```bibtex
+@misc{hunyuan3d22025tencent,
+ title={Hunyuan3D 2.0: Scaling Diffusion Models for High Resolution Textured 3D Assets Generation},
+ author={Tencent Hunyuan3D Team},
+ year={2025},
+}
+```
+
+## Acknowledgements
+
+We would like to thank the contributors to
+the [DINOv2](https://github.com/facebookresearch/dinov2), [Stable Diffusion](https://github.com/Stability-AI/stablediffusion), [FLUX](https://github.com/black-forest-labs/flux), [diffusers](https://github.com/huggingface/diffusers)
+and [HuggingFace](https://huggingface.co) repositories, for their open research and exploration.
+
+## Star History
+
+
+
+
diff --git a/.ipynb_checkpoints/gradio_app-checkpoint.py b/.ipynb_checkpoints/gradio_app-checkpoint.py
new file mode 100644
index 0000000000000000000000000000000000000000..5b95833f15055cf265f0a9f6623a57614f1efe08
--- /dev/null
+++ b/.ipynb_checkpoints/gradio_app-checkpoint.py
@@ -0,0 +1,392 @@
+import os
+import shutil
+import time
+from glob import glob
+from pathlib import Path
+
+import gradio as gr
+import torch
+import uvicorn
+from fastapi import FastAPI
+from fastapi.staticfiles import StaticFiles
+
+
+def get_example_img_list():
+    print('Loading example img list ...')
+    return sorted(glob('./assets/example_images/*.png'))
+
+
+def get_example_txt_list():
+    print('Loading example txt list ...')
+    txt_list = list()
+    for line in open('./assets/example_prompts.txt'):
+        txt_list.append(line.strip())
+    return txt_list
+
+
+def gen_save_folder(max_size=60):
+    # Rotate through at most `max_size` numbered sub-folders under SAVE_DIR,
+    # reusing the lowest free slot and clearing the next slot in advance.
+    os.makedirs(SAVE_DIR, exist_ok=True)
+    exists = set(int(name) for name in os.listdir(SAVE_DIR) if not name.startswith("."))
+    cur_id = min(set(range(max_size)) - exists) if len(exists) < max_size else -1
+    if os.path.exists(f"{SAVE_DIR}/{(cur_id + 1) % max_size}"):
+        shutil.rmtree(f"{SAVE_DIR}/{(cur_id + 1) % max_size}")
+        print(f"remove {SAVE_DIR}/{(cur_id + 1) % max_size} success !!!")
+    save_folder = f"{SAVE_DIR}/{max(0, cur_id)}"
+    os.makedirs(save_folder, exist_ok=True)
+    print(f"mkdir {save_folder} success !!!")
+    return save_folder
+
+
+def export_mesh(mesh, save_folder, textured=False):
+    # Export the mesh as GLB; textured exports also include vertex normals.
+    if textured:
+        path = os.path.join(save_folder, 'textured_mesh.glb')
+    else:
+        path = os.path.join(save_folder, 'white_mesh.glb')
+    mesh.export(path, include_normals=textured)
+    return path
+
+
+def build_model_viewer_html(save_folder, height=660, width=790, textured=False):
+    # Fill the model-viewer HTML template with a relative path to the exported GLB.
+    if textured:
+        related_path = "./textured_mesh.glb"
+        template_name = './assets/modelviewer-textured-template.html'
+        output_html_path = os.path.join(save_folder, 'textured_mesh.html')
+    else:
+        related_path = "./white_mesh.glb"
+        template_name = './assets/modelviewer-template.html'
+        output_html_path = os.path.join(save_folder, 'white_mesh.html')
+
+    with open(os.path.join(CURRENT_DIR, template_name), 'r') as f:
+        template_html = f.read()
+    obj_html = f"""
+