diff --git a/.gitattributes b/.gitattributes index 2c690979911f3e7f1637d167724d3ac8aa0dd998..cc4526ca08f2bd8f3f657378a365aa286c8d599d 100644 --- a/.gitattributes +++ b/.gitattributes @@ -43,3 +43,4 @@ gradio_cache/3/textured_mesh.glb filter=lfs diff=lfs merge=lfs -text gradio_cache/4/textured_mesh.glb filter=lfs diff=lfs merge=lfs -text gradio_cache/5/textured_mesh.glb filter=lfs diff=lfs merge=lfs -text *.whl filter=lfs diff=lfs merge=lfs -text +gradio_cache/1/textured_mesh.glb filter=lfs diff=lfs merge=lfs -text diff --git a/.ipynb_checkpoints/README-checkpoint.md b/.ipynb_checkpoints/README-checkpoint.md new file mode 100644 index 0000000000000000000000000000000000000000..ccbc080911bbf184e02b42880b4d8c6bdf485aa2 --- /dev/null +++ b/.ipynb_checkpoints/README-checkpoint.md @@ -0,0 +1,201 @@ +--- +title: Hunyuan3D-2.0 +emoji: 🌍 +colorFrom: purple +colorTo: red +sdk: gradio +sdk_version: 4.44.1 +app_file: hg_app.py +pinned: false +short_description: Text-to-3D and Image-to-3D Generation +--- + +[中文阅读](README_zh_cn.md) + +

+ + + +

+ +
+ + + + + +
+ + +[//]: # ( ) + +[//]: # ( ) + +[//]: # ( PyPI - Downloads) + +
+

+“ Living out everyone’s imagination on creating and manipulating 3D assets.” +

+ +## 🔥 News + +- Jan 21, 2025: 💬 Released [Hunyuan3D 2.0](https://huggingface.co/spaces/tencent/Hunyuan3D-2). Please give it a try! + +## **Abstract** + +We present Hunyuan3D 2.0, an advanced large-scale 3D synthesis system for generating high-resolution textured 3D assets. +This system includes two foundation components: a large-scale shape generation model - Hunyuan3D-DiT, and a large-scale +texture synthesis model - Hunyuan3D-Paint. +The shape generative model, built on a scalable flow-based diffusion transformer, aims to create geometry that properly +aligns with a given condition image, laying a solid foundation for downstream applications. +The texture synthesis model, benefiting from strong geometric and diffusion priors, produces high-resolution and vibrant +texture maps for either generated or hand-crafted meshes. +Furthermore, we build Hunyuan3D-Studio - a versatile, user-friendly production platform that simplifies the re-creation +process of 3D assets. It allows both professional and amateur users to manipulate or even animate their meshes +efficiently. +We systematically evaluate our models, showing that Hunyuan3D 2.0 outperforms previous state-of-the-art models, +both open-source and closed-source, in geometry details, condition alignment, and texture quality. + + +

+ +

+ +## ☯️ **Hunyuan3D 2.0** + +### Architecture + +Hunyuan3D 2.0 features a two-stage generation pipeline, starting with the creation of a bare mesh, followed by the +synthesis of a texture map for that mesh. This strategy is effective for decoupling the difficulties of shape and +texture generation and also provides flexibility for texturing either generated or handcrafted meshes. + +
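As a quick, non-authoritative sketch, the two stages map one-to-one onto the pipelines shown in the API Usage section below (same repo id and example image as used there):

```python
# Minimal sketch of the two-stage pipeline described above.
from hy3dgen.shapegen import Hunyuan3DDiTFlowMatchingPipeline
from hy3dgen.texgen import Hunyuan3DPaintPipeline

# Stage 1: condition image -> bare (untextured) mesh with Hunyuan3D-DiT
shape_pipeline = Hunyuan3DDiTFlowMatchingPipeline.from_pretrained('tencent/Hunyuan3D-2')
bare_mesh = shape_pipeline(image='assets/demo.png')[0]

# Stage 2: bare mesh + condition image -> textured mesh with Hunyuan3D-Paint
paint_pipeline = Hunyuan3DPaintPipeline.from_pretrained('tencent/Hunyuan3D-2')
textured_mesh = paint_pipeline(bare_mesh, image='assets/demo.png')
```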

+ +

+ +### Performance + +We have evaluated Hunyuan3D 2.0 against other open-source as well as closed-source 3D generation methods. +The numerical results indicate that Hunyuan3D 2.0 surpasses all baselines in the quality of generated textured 3D assets +and in condition-following ability. + +| Model | CMMD(⬇) | FID_CLIP(⬇) | FID(⬇) | CLIP-score(⬆) | +|-------------------------|-----------|-------------|-------------|---------------| +| Top Open-source Model1 | 3.591 | 54.639 | 289.287 | 0.787 | +| Top Closed-source Model1 | 3.600 | 55.866 | 305.922 | 0.779 | +| Top Closed-source Model2 | 3.368 | 49.744 | 294.628 | 0.806 | +| Top Closed-source Model3 | 3.218 | 51.574 | 295.691 | 0.799 | +| Hunyuan3D 2.0 | **3.193** | **49.165** | **282.429** | **0.809** | + +Generation results of Hunyuan3D 2.0: +

+ + +
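For intuition on the condition-alignment numbers above, the sketch below shows one plausible way to compute a CLIP-based image similarity between the condition image and a rendering of the generated asset. This is not the official evaluation code; the CLIP checkpoint and file paths are assumptions for illustration only.

```python
# Hedged sketch: CLIP image-image similarity (not the official CLIP-score implementation).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")        # assumed checkpoint
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

condition = Image.open("assets/demo.png").convert("RGB")                 # condition image
render = Image.open("render_of_generated_asset.png").convert("RGB")      # hypothetical render path

inputs = processor(images=[condition, render], return_tensors="pt")
with torch.no_grad():
    features = model.get_image_features(**inputs)
features = features / features.norm(dim=-1, keepdim=True)
similarity = (features[0] @ features[1]).item()  # cosine similarity in CLIP embedding space
print(f"CLIP similarity: {similarity:.3f}")
```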

+ +### Pretrained Models + +| Model | Date | Huggingface | +|----------------------|------------|--------------------------------------------------------| +| Hunyuan3D-DiT-v2-0 | 2025-01-21 | [Download](https://huggingface.co/tencent/Hunyuan3D-2) | +| Hunyuan3D-Paint-v2-0 | 2025-01-21 | [Download](https://huggingface.co/tencent/Hunyuan3D-2) | + +## 🤗 Get Started with Hunyuan3D 2.0 + +You may follow the steps below to use Hunyuan3D 2.0 via code or the Gradio App. + +### Install Requirements + +Please install PyTorch via the [official](https://pytorch.org/) site. Then install the other requirements via + +```bash +pip install -r requirements.txt +# for texture +cd hy3dgen/texgen/custom_rasterizer +python3 setup.py install +cd hy3dgen/texgen/differentiable_renderer +bash compile_mesh_painter.sh +``` + +### API Usage + +We designed a diffusers-like API to use our shape generation model - Hunyuan3D-DiT and texture synthesis model - +Hunyuan3D-Paint. + +You can access **Hunyuan3D-DiT** via: + +```python +from hy3dgen.shapegen import Hunyuan3DDiTFlowMatchingPipeline + +pipeline = Hunyuan3DDiTFlowMatchingPipeline.from_pretrained('tencent/Hunyuan3D-2') +mesh = pipeline(image='assets/demo.png')[0] +``` + +The output mesh is a [trimesh object](https://trimesh.org/trimesh.html), which you can save to a glb/obj (or other +format) file. + +For **Hunyuan3D-Paint**, do the following: + +```python +from hy3dgen.texgen import Hunyuan3DPaintPipeline +from hy3dgen.shapegen import Hunyuan3DDiTFlowMatchingPipeline + +# let's generate a mesh first +pipeline = Hunyuan3DDiTFlowMatchingPipeline.from_pretrained('tencent/Hunyuan3D-2') +mesh = pipeline(image='assets/demo.png')[0] + +pipeline = Hunyuan3DPaintPipeline.from_pretrained('tencent/Hunyuan3D-2') +mesh = pipeline(mesh, image='assets/demo.png') +``` + +Please visit [minimal_demo.py](minimal_demo.py) for more advanced usage, such as **text to 3D** and **texture generation +for handcrafted meshes**. + +### Gradio App + +You can also host a [Gradio](https://www.gradio.app/) App on your own computer via: + +```bash +pip3 install gradio==3.39.0 +python3 gradio_app.py +``` + +Don't forget to visit [Hunyuan3D](https://3d.hunyuan.tencent.com) for quick use if you don't want to host it yourself. + +## 📑 Open-Source Plan + +- [x] Inference Code +- [x] Model Checkpoints +- [ ] ComfyUI +- [ ] TensorRT Version + +## 🔗 BibTeX + +If you find this repository helpful, please cite our report: + +```bibtex +@misc{hunyuan3d22025tencent, + title={Hunyuan3D 2.0: Scaling Diffusion Models for High Resolution Textured 3D Assets Generation}, + author={Tencent Hunyuan3D Team}, + year={2025}, +} +``` + +## Acknowledgements + +We would like to thank the contributors to +the [DINOv2](https://github.com/facebookresearch/dinov2), [Stable Diffusion](https://github.com/Stability-AI/stablediffusion), [FLUX](https://github.com/black-forest-labs/flux), [diffusers](https://github.com/huggingface/diffusers) +and [HuggingFace](https://huggingface.co) repositories, for their open research and exploration.
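One small, hedged follow-up to the API Usage section above: because the shape pipeline returns a trimesh object, exporting it is a one-liner (the output file names below are only examples):

```python
# Hedged sketch: saving the trimesh object returned by Hunyuan3D-DiT (file names are illustrative).
from hy3dgen.shapegen import Hunyuan3DDiTFlowMatchingPipeline

pipeline = Hunyuan3DDiTFlowMatchingPipeline.from_pretrained('tencent/Hunyuan3D-2')
mesh = pipeline(image='assets/demo.png')[0]

mesh.export('white_mesh.glb')  # trimesh infers the exporter from the file extension
mesh.export('white_mesh.obj')  # other trimesh-supported formats work the same way
```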
+ +## Star History + + + + + + Star History Chart + + diff --git a/.ipynb_checkpoints/gradio_app-checkpoint.py b/.ipynb_checkpoints/gradio_app-checkpoint.py new file mode 100644 index 0000000000000000000000000000000000000000..5b95833f15055cf265f0a9f6623a57614f1efe08 --- /dev/null +++ b/.ipynb_checkpoints/gradio_app-checkpoint.py @@ -0,0 +1,392 @@ +import os +import shutil +import time +from glob import glob +from pathlib import Path + +import gradio as gr +import torch +import uvicorn +from fastapi import FastAPI +from fastapi.staticfiles import StaticFiles + + +def get_example_img_list(): + print('Loading example img list ...') + return sorted(glob('./assets/example_images/*.png')) + + +def get_example_txt_list(): + print('Loading example txt list ...') + txt_list = list() + for line in open('./assets/example_prompts.txt'): + txt_list.append(line.strip()) + return txt_list + + +def gen_save_folder(max_size=60): + os.makedirs(SAVE_DIR, exist_ok=True) + exists = set(int(_) for _ in os.listdir(SAVE_DIR) if not _.startswith(".")) + cur_id = min(set(range(max_size)) - exists) if len(exists) < max_size else -1 + if os.path.exists(f"{SAVE_DIR}/{(cur_id + 1) % max_size}"): + shutil.rmtree(f"{SAVE_DIR}/{(cur_id + 1) % max_size}") + print(f"remove {SAVE_DIR}/{(cur_id + 1) % max_size} success !!!") + save_folder = f"{SAVE_DIR}/{max(0, cur_id)}" + os.makedirs(save_folder, exist_ok=True) + print(f"mkdir {save_folder} suceess !!!") + return save_folder + + +def export_mesh(mesh, save_folder, textured=False): + if textured: + path = os.path.join(save_folder, f'textured_mesh.glb') + else: + path = os.path.join(save_folder, f'white_mesh.glb') + mesh.export(path, include_normals=textured) + return path + + +def build_model_viewer_html(save_folder, height=660, width=790, textured=False): + if textured: + related_path = f"./textured_mesh.glb" + template_name = './assets/modelviewer-textured-template.html' + output_html_path = os.path.join(save_folder, f'textured_mesh.html') + else: + related_path = f"./white_mesh.glb" + template_name = './assets/modelviewer-template.html' + output_html_path = os.path.join(save_folder, f'white_mesh.html') + + with open(os.path.join(CURRENT_DIR, template_name), 'r') as f: + template_html = f.read() + obj_html = f""" +
+ + +
+ """ + + with open(output_html_path, 'w') as f: + f.write(template_html.replace('', obj_html)) + + output_html_path = output_html_path.replace(SAVE_DIR + '/', '') + iframe_tag = f'' + print(f'Find html {output_html_path}, {os.path.exists(output_html_path)}') + + return f""" +
+ {iframe_tag} +
+ """ + + +def _gen_shape( + caption, + image, + steps=50, + guidance_scale=7.5, + seed=1234, + octree_resolution=256, + check_box_rembg=False, +): + if caption: print('prompt is', caption) + save_folder = gen_save_folder() + stats = {} + time_meta = {} + start_time_0 = time.time() + + if image is None: + start_time = time.time() + try: + image = t2i_worker(caption) + except Exception as e: + raise gr.Error(f"Text to 3D is disable. Please enable it by `python gradio_app.py --enable_t23d`.") + time_meta['text2image'] = time.time() - start_time + + image.save(os.path.join(save_folder, 'input.png')) + + print(image.mode) + if check_box_rembg or image.mode == "RGB": + start_time = time.time() + image = rmbg_worker(image.convert('RGB')) + time_meta['rembg'] = time.time() - start_time + + image.save(os.path.join(save_folder, 'rembg.png')) + + # image to white model + start_time = time.time() + + generator = torch.Generator() + generator = generator.manual_seed(int(seed)) + mesh = i23d_worker( + image=image, + num_inference_steps=steps, + guidance_scale=guidance_scale, + generator=generator, + octree_resolution=octree_resolution + )[0] + + mesh = FloaterRemover()(mesh) + mesh = DegenerateFaceRemover()(mesh) + mesh = FaceReducer()(mesh) + + stats['number_of_faces'] = mesh.faces.shape[0] + stats['number_of_vertices'] = mesh.vertices.shape[0] + + time_meta['image_to_textured_3d'] = {'total': time.time() - start_time} + time_meta['total'] = time.time() - start_time_0 + stats['time'] = time_meta + return mesh, save_folder + + +def generation_all( + caption, + image, + steps=50, + guidance_scale=7.5, + seed=1234, + octree_resolution=256, + check_box_rembg=False +): + mesh, save_folder = _gen_shape( + caption, + image, + steps=steps, + guidance_scale=guidance_scale, + seed=seed, + octree_resolution=octree_resolution, + check_box_rembg=check_box_rembg + ) + path = export_mesh(mesh, save_folder, textured=False) + model_viewer_html = build_model_viewer_html(save_folder, height=596, width=700) + + textured_mesh = texgen_worker(mesh, image) + path_textured = export_mesh(textured_mesh, save_folder, textured=True) + model_viewer_html_textured = build_model_viewer_html(save_folder, height=596, width=700, textured=True) + + return ( + gr.update(value=path, visible=True), + gr.update(value=path_textured, visible=True), + model_viewer_html, + model_viewer_html_textured, + ) + + +def shape_generation( + caption, + image, + steps=50, + guidance_scale=7.5, + seed=1234, + octree_resolution=256, + check_box_rembg=False, +): + mesh, save_folder = _gen_shape( + caption, + image, + steps=steps, + guidance_scale=guidance_scale, + seed=seed, + octree_resolution=octree_resolution, + check_box_rembg=check_box_rembg + ) + + path = export_mesh(mesh, save_folder, textured=False) + model_viewer_html = build_model_viewer_html(save_folder, height=596, width=700) + + return ( + gr.update(value=path, visible=True), + model_viewer_html, + ) + + +def build_app(): + title_html = """ +
+ + Hunyuan3D-2: Scaling Diffusion Models for High Resolution Textured 3D Assets Generation +
+
+ Tencent Hunyuan3D Team +
+
+ Github Page   + Homepage   + Technical Report   + Models   +
+ """ + + with gr.Blocks(theme=gr.themes.Base(), title='Hunyuan-3D-2.0') as demo: + gr.HTML(title_html) + + with gr.Row(): + with gr.Column(scale=2): + with gr.Tabs() as tabs_prompt: + with gr.Tab('Image Prompt', id='tab_img_prompt') as tab_ip: + image = gr.Image(label='Image', type='pil', image_mode='RGBA', height=290) + with gr.Row(): + check_box_rembg = gr.Checkbox(value=True, label='Remove Background') + + with gr.Tab('Text Prompt', id='tab_txt_prompt', visible=HAS_T2I) as tab_tp: + caption = gr.Textbox(label='Text Prompt', + placeholder='HunyuanDiT will be used to generate image.', + info='Example: A 3D model of a cute cat, white background') + + with gr.Accordion('Advanced Options', open=False): + num_steps = gr.Slider(maximum=50, minimum=20, value=30, step=1, label='Inference Steps') + octree_resolution = gr.Dropdown([256, 384, 512], value=256, label='Octree Resolution') + cfg_scale = gr.Number(value=5.5, label='Guidance Scale') + seed = gr.Slider(maximum=1e7, minimum=0, value=1234, label='Seed') + + with gr.Group(): + btn = gr.Button(value='Generate Shape Only', variant='primary') + btn_all = gr.Button(value='Generate Shape and Texture', variant='primary', visible=HAS_TEXTUREGEN) + + with gr.Group(): + file_out = gr.File(label="File", visible=False) + file_out2 = gr.File(label="File", visible=False) + + with gr.Column(scale=5): + with gr.Tabs(): + with gr.Tab('Generated Mesh') as mesh1: + html_output1 = gr.HTML(HTML_OUTPUT_PLACEHOLDER, label='Output') + with gr.Tab('Generated Textured Mesh') as mesh2: + html_output2 = gr.HTML(HTML_OUTPUT_PLACEHOLDER, label='Output') + + with gr.Column(scale=2): + with gr.Tabs() as gallery: + with gr.Tab('Image to 3D Gallery', id='tab_img_gallery') as tab_gi: + with gr.Row(): + gr.Examples(examples=example_is, inputs=[image], + label="Image Prompts", examples_per_page=18) + + with gr.Tab('Text to 3D Gallery', id='tab_txt_gallery', visible=HAS_T2I) as tab_gt: + with gr.Row(): + gr.Examples(examples=example_ts, inputs=[caption], + label="Text Prompts", examples_per_page=18) + + if not HAS_TEXTUREGEN: + gr.HTML(""") +
+ Warning: + Texture synthesis is disabled due to missing requirements; + please install the requirements following README.md to activate it. +
+ """) + if not args.enable_t23d: + gr.HTML(""" +
+ Warning: + Text to 3D is disabled. To activate it, please run `python gradio_app.py --enable_t23d`. +
+ """) + + tab_gi.select(fn=lambda: gr.update(selected='tab_img_prompt'), outputs=tabs_prompt) + if HAS_T2I: + tab_gt.select(fn=lambda: gr.update(selected='tab_txt_prompt'), outputs=tabs_prompt) + + btn.click( + shape_generation, + inputs=[ + caption, + image, + num_steps, + cfg_scale, + seed, + octree_resolution, + check_box_rembg, + ], + outputs=[file_out, html_output1] + ).then( + lambda: gr.update(visible=True), + outputs=[file_out], + ) + + btn_all.click( + generation_all, + inputs=[ + caption, + image, + num_steps, + cfg_scale, + seed, + octree_resolution, + check_box_rembg, + ], + outputs=[file_out, file_out2, html_output1, html_output2] + ).then( + lambda: (gr.update(visible=True), gr.update(visible=True)), + outputs=[file_out, file_out2], + ) + + return demo + + +if __name__ == '__main__': + import argparse + + parser = argparse.ArgumentParser() + parser.add_argument('--port', type=int, default=8080) + parser.add_argument('--cache-path', type=str, default='gradio_cache') + parser.add_argument('--enable_t23d', action='store_true') + args = parser.parse_args() + + SAVE_DIR = args.cache_path + os.makedirs(SAVE_DIR, exist_ok=True) + + CURRENT_DIR = os.path.dirname(os.path.abspath(__file__)) + + HTML_OUTPUT_PLACEHOLDER = """ +
+ """ + + INPUT_MESH_HTML = """ +
+
+ """ + example_is = get_example_img_list() + example_ts = get_example_txt_list() + + try: + from hy3dgen.texgen import Hunyuan3DPaintPipeline + + texgen_worker = Hunyuan3DPaintPipeline.from_pretrained('tencent/Hunyuan3D-2') + HAS_TEXTUREGEN = True + except Exception as e: + print(e) + print("Failed to load texture generator.") + print('Please try to install requirements by following README.md') + HAS_TEXTUREGEN = False + + HAS_T2I = False + if args.enable_t23d: + from hy3dgen.text2image import HunyuanDiTPipeline + + t2i_worker = HunyuanDiTPipeline('Tencent-Hunyuan/HunyuanDiT-v1.1-Diffusers-Distilled') + HAS_T2I = True + + from hy3dgen.shapegen import FaceReducer, FloaterRemover, DegenerateFaceRemover, \ + Hunyuan3DDiTFlowMatchingPipeline + from hy3dgen.rembg import BackgroundRemover + + rmbg_worker = BackgroundRemover() + i23d_worker = Hunyuan3DDiTFlowMatchingPipeline.from_pretrained('tencent/Hunyuan3D-2') + floater_remove_worker = FloaterRemover() + degenerate_face_remove_worker = DegenerateFaceRemover() + face_reduce_worker = FaceReducer() + + # https://discuss.huggingface.co/t/how-to-serve-an-html-file/33921/2 + # create a FastAPI app + app = FastAPI() + # create a static directory to store the static files + static_dir = Path('./gradio_cache') + static_dir.mkdir(parents=True, exist_ok=True) + app.mount("/static", StaticFiles(directory=static_dir), name="static") + + demo = build_app() + app = gr.mount_gradio_app(app, demo, path="/") + uvicorn.run(app, host="0.0.0.0", port=args.port) diff --git a/.ipynb_checkpoints/hg_app-checkpoint.py b/.ipynb_checkpoints/hg_app-checkpoint.py new file mode 100644 index 0000000000000000000000000000000000000000..41d691e119a36d0d4460f9fa287e36e667bc3a5e --- /dev/null +++ b/.ipynb_checkpoints/hg_app-checkpoint.py @@ -0,0 +1,416 @@ +import os +import spaces +import subprocess +def install_cuda_toolkit(): + # CUDA_TOOLKIT_URL = "https://developer.download.nvidia.com/compute/cuda/11.8.0/local_installers/cuda_11.8.0_520.61.05_linux.run" + CUDA_TOOLKIT_URL = "https://developer.download.nvidia.com/compute/cuda/12.2.0/local_installers/cuda_12.2.0_535.54.03_linux.run" + CUDA_TOOLKIT_FILE = "/tmp/%s" % os.path.basename(CUDA_TOOLKIT_URL) + subprocess.call(["wget", "-q", CUDA_TOOLKIT_URL, "-O", CUDA_TOOLKIT_FILE]) + subprocess.call(["chmod", "+x", CUDA_TOOLKIT_FILE]) + subprocess.call([CUDA_TOOLKIT_FILE, "--silent", "--toolkit"]) + + os.environ["CUDA_HOME"] = "/usr/local/cuda" + os.environ["PATH"] = "%s/bin:%s" % (os.environ["CUDA_HOME"], os.environ["PATH"]) + os.environ["LD_LIBRARY_PATH"] = "%s/lib:%s" % ( + os.environ["CUDA_HOME"], + "" if "LD_LIBRARY_PATH" not in os.environ else os.environ["LD_LIBRARY_PATH"], + ) + # Fix: arch_list[-1] += '+PTX'; IndexError: list index out of range + os.environ["TORCH_CUDA_ARCH_LIST"] = "8.0;8.6" + +install_cuda_toolkit() +os.system("cd /home/user/app/hy3dgen/texgen/differentiable_renderer/ && bash compile_mesh_painter.sh") +os.system("cd /home/user/app/hy3dgen/texgen/custom_rasterizer && pip install .") + +import os +import shutil +import time +from glob import glob +from pathlib import Path + +import gradio as gr +import torch +import uvicorn +from fastapi import FastAPI +from fastapi.staticfiles import StaticFiles + + +def get_example_img_list(): + print('Loading example img list ...') + return sorted(glob('./assets/example_images/*.png')) + + +def get_example_txt_list(): + print('Loading example txt list ...') + txt_list = list() + for line in open('./assets/example_prompts.txt'): + txt_list.append(line.strip()) + return 
txt_list + + +def gen_save_folder(max_size=60): + os.makedirs(SAVE_DIR, exist_ok=True) + exists = set(int(_) for _ in os.listdir(SAVE_DIR) if not _.startswith(".")) + cur_id = min(set(range(max_size)) - exists) if len(exists) < max_size else -1 + if os.path.exists(f"{SAVE_DIR}/{(cur_id + 1) % max_size}"): + shutil.rmtree(f"{SAVE_DIR}/{(cur_id + 1) % max_size}") + print(f"remove {SAVE_DIR}/{(cur_id + 1) % max_size} success !!!") + save_folder = f"{SAVE_DIR}/{max(0, cur_id)}" + os.makedirs(save_folder, exist_ok=True) + print(f"mkdir {save_folder} suceess !!!") + return save_folder + + +def export_mesh(mesh, save_folder, textured=False): + if textured: + path = os.path.join(save_folder, f'textured_mesh.glb') + else: + path = os.path.join(save_folder, f'white_mesh.glb') + mesh.export(path, include_normals=textured) + return path + + +def build_model_viewer_html(save_folder, height=660, width=790, textured=False): + if textured: + related_path = f"./textured_mesh.glb" + template_name = './assets/modelviewer-textured-template.html' + output_html_path = os.path.join(save_folder, f'textured_mesh.html') + else: + related_path = f"./white_mesh.glb" + template_name = './assets/modelviewer-template.html' + output_html_path = os.path.join(save_folder, f'white_mesh.html') + + with open(os.path.join(CURRENT_DIR, template_name), 'r') as f: + template_html = f.read() + obj_html = f""" +
+ + +
+ """ + + with open(output_html_path, 'w') as f: + f.write(template_html.replace('', obj_html)) + + output_html_path = output_html_path.replace(SAVE_DIR + '/', '') + iframe_tag = f'' + print(f'Find html {output_html_path}, {os.path.exists(output_html_path)}') + + return f""" +
+ {iframe_tag} +
+ """ + +@spaces.GPU(duration=40) +def _gen_shape( + caption, + image, + steps=50, + guidance_scale=7.5, + seed=1234, + octree_resolution=256, + check_box_rembg=False, +): + if caption: print('prompt is', caption) + save_folder = gen_save_folder() + stats = {} + time_meta = {} + start_time_0 = time.time() + + if image is None: + start_time = time.time() + try: + image = t2i_worker(caption) + except Exception as e: + raise gr.Error(f"Text to 3D is disable. Please enable it by `python gradio_app.py --enable_t23d`.") + time_meta['text2image'] = time.time() - start_time + + image.save(os.path.join(save_folder, 'input.png')) + + print(image.mode) + if check_box_rembg or image.mode == "RGB": + start_time = time.time() + image = rmbg_worker(image.convert('RGB')) + time_meta['rembg'] = time.time() - start_time + + image.save(os.path.join(save_folder, 'rembg.png')) + + # image to white model + start_time = time.time() + + generator = torch.Generator() + generator = generator.manual_seed(int(seed)) + mesh = i23d_worker( + image=image, + num_inference_steps=steps, + guidance_scale=guidance_scale, + generator=generator, + octree_resolution=octree_resolution + )[0] + + mesh = FloaterRemover()(mesh) + mesh = DegenerateFaceRemover()(mesh) + mesh = FaceReducer()(mesh) + + stats['number_of_faces'] = mesh.faces.shape[0] + stats['number_of_vertices'] = mesh.vertices.shape[0] + + time_meta['image_to_textured_3d'] = {'total': time.time() - start_time} + time_meta['total'] = time.time() - start_time_0 + stats['time'] = time_meta + return mesh, save_folder + +@spaces.GPU(duration=60) +def generation_all( + caption, + image, + steps=50, + guidance_scale=7.5, + seed=1234, + octree_resolution=256, + check_box_rembg=False +): + mesh, save_folder = _gen_shape( + caption, + image, + steps=steps, + guidance_scale=guidance_scale, + seed=seed, + octree_resolution=octree_resolution, + check_box_rembg=check_box_rembg + ) + path = export_mesh(mesh, save_folder, textured=False) + model_viewer_html = build_model_viewer_html(save_folder, height=596, width=700) + + textured_mesh = texgen_worker(mesh, image) + path_textured = export_mesh(textured_mesh, save_folder, textured=True) + model_viewer_html_textured = build_model_viewer_html(save_folder, height=596, width=700, textured=True) + + return ( + gr.update(value=path, visible=True), + gr.update(value=path_textured, visible=True), + model_viewer_html, + model_viewer_html_textured, + ) + +@spaces.GPU(duration=40) +def shape_generation( + caption, + image, + steps=50, + guidance_scale=7.5, + seed=1234, + octree_resolution=256, + check_box_rembg=False, +): + mesh, save_folder = _gen_shape( + caption, + image, + steps=steps, + guidance_scale=guidance_scale, + seed=seed, + octree_resolution=octree_resolution, + check_box_rembg=check_box_rembg + ) + + path = export_mesh(mesh, save_folder, textured=False) + model_viewer_html = build_model_viewer_html(save_folder, height=596, width=700) + + return ( + gr.update(value=path, visible=True), + model_viewer_html, + ) + + +def build_app(): + title_html = """ +
+ + Hunyuan3D-2: Scaling Diffusion Models for High Resolution Textured 3D Assets Generation +
+
+ Tencent Hunyuan3D Team +
+
+ Github Page   + Homepage   + Technical Report   + Models   +
+ """ + + with gr.Blocks(theme=gr.themes.Base(), title='Hunyuan-3D-2.0') as demo: + gr.HTML(title_html) + + with gr.Row(): + with gr.Column(scale=2): + with gr.Tabs() as tabs_prompt: + with gr.Tab('Image Prompt', id='tab_img_prompt') as tab_ip: + image = gr.Image(label='Image', type='pil', image_mode='RGBA', height=290) + with gr.Row(): + check_box_rembg = gr.Checkbox(value=True, label='Remove Background') + + with gr.Tab('Text Prompt', id='tab_txt_prompt', visible=HAS_T2I) as tab_tp: + caption = gr.Textbox(label='Text Prompt', + placeholder='HunyuanDiT will be used to generate image.', + info='Example: A 3D model of a cute cat, white background') + + with gr.Accordion('Advanced Options', open=False): + num_steps = gr.Slider(maximum=50, minimum=20, value=30, step=1, label='Inference Steps') + octree_resolution = gr.Dropdown([256, 384, 512], value=256, label='Octree Resolution') + cfg_scale = gr.Number(value=5.5, label='Guidance Scale') + seed = gr.Slider(maximum=1e7, minimum=0, value=1234, label='Seed') + + with gr.Group(): + btn = gr.Button(value='Generate Shape Only', variant='primary') + btn_all = gr.Button(value='Generate Shape and Texture', variant='primary', visible=HAS_TEXTUREGEN) + + with gr.Group(): + file_out = gr.File(label="File", visible=False) + file_out2 = gr.File(label="File", visible=False) + + with gr.Column(scale=5): + with gr.Tabs(): + with gr.Tab('Generated Mesh') as mesh1: + html_output1 = gr.HTML(HTML_OUTPUT_PLACEHOLDER, label='Output') + with gr.Tab('Generated Textured Mesh') as mesh2: + html_output2 = gr.HTML(HTML_OUTPUT_PLACEHOLDER, label='Output') + + with gr.Column(scale=2): + with gr.Tabs() as gallery: + with gr.Tab('Image to 3D Gallery', id='tab_img_gallery') as tab_gi: + with gr.Row(): + gr.Examples(examples=example_is, inputs=[image], + label="Image Prompts", examples_per_page=18) + + with gr.Tab('Text to 3D Gallery', id='tab_txt_gallery', visible=HAS_T2I) as tab_gt: + with gr.Row(): + gr.Examples(examples=example_ts, inputs=[caption], + label="Text Prompts", examples_per_page=18) + + if not HAS_TEXTUREGEN: + gr.HTML(""") +
+ Warning: + Texture synthesis is disabled due to missing requirements; + please install the requirements following README.md to activate it. +
+ """) + if not args.enable_t23d: + gr.HTML(""" +
+ Warning: + Text to 3D is disabled. To activate it, please run `python gradio_app.py --enable_t23d`. +
+ """) + + tab_gi.select(fn=lambda: gr.update(selected='tab_img_prompt'), outputs=tabs_prompt) + if HAS_T2I: + tab_gt.select(fn=lambda: gr.update(selected='tab_txt_prompt'), outputs=tabs_prompt) + + btn.click( + shape_generation, + inputs=[ + caption, + image, + num_steps, + cfg_scale, + seed, + octree_resolution, + check_box_rembg, + ], + outputs=[file_out, html_output1] + ).then( + lambda: gr.update(visible=True), + outputs=[file_out], + ) + + btn_all.click( + generation_all, + inputs=[ + caption, + image, + num_steps, + cfg_scale, + seed, + octree_resolution, + check_box_rembg, + ], + outputs=[file_out, file_out2, html_output1, html_output2] + ).then( + lambda: (gr.update(visible=True), gr.update(visible=True)), + outputs=[file_out, file_out2], + ) + + return demo + + +if __name__ == '__main__': + import argparse + + parser = argparse.ArgumentParser() + parser.add_argument('--port', type=int, default=8080) + parser.add_argument('--cache-path', type=str, default='gradio_cache') + parser.add_argument('--enable_t23d', default=True) + args = parser.parse_args() + + SAVE_DIR = args.cache_path + os.makedirs(SAVE_DIR, exist_ok=True) + + CURRENT_DIR = os.path.dirname(os.path.abspath(__file__)) + + HTML_OUTPUT_PLACEHOLDER = """ +
+ """ + + INPUT_MESH_HTML = """ +
+
+ """ + example_is = get_example_img_list() + example_ts = get_example_txt_list() + + try: + from hy3dgen.texgen import Hunyuan3DPaintPipeline + + texgen_worker = Hunyuan3DPaintPipeline.from_pretrained('tencent/Hunyuan3D-2') + HAS_TEXTUREGEN = True + except Exception as e: + print(e) + print("Failed to load texture generator.") + print('Please try to install requirements by following README.md') + HAS_TEXTUREGEN = False + + HAS_T2I = False + if args.enable_t23d: + from hy3dgen.text2image import HunyuanDiTPipeline + + t2i_worker = HunyuanDiTPipeline('Tencent-Hunyuan/HunyuanDiT-v1.1-Diffusers-Distilled') + HAS_T2I = True + + from hy3dgen.shapegen import FaceReducer, FloaterRemover, DegenerateFaceRemover, \ + Hunyuan3DDiTFlowMatchingPipeline + from hy3dgen.rembg import BackgroundRemover + + rmbg_worker = BackgroundRemover() + i23d_worker = Hunyuan3DDiTFlowMatchingPipeline.from_pretrained('tencent/Hunyuan3D-2') + floater_remove_worker = FloaterRemover() + degenerate_face_remove_worker = DegenerateFaceRemover() + face_reduce_worker = FaceReducer() + + # https://discuss.huggingface.co/t/how-to-serve-an-html-file/33921/2 + # create a FastAPI app + app = FastAPI() + # create a static directory to store the static files + static_dir = Path('./gradio_cache') + static_dir.mkdir(parents=True, exist_ok=True) + app.mount("/static", StaticFiles(directory=static_dir), name="static") + + demo = build_app() + app = gr.mount_gradio_app(app, demo, path="/") + uvicorn.run(app, host="0.0.0.0", port=7860) diff --git a/.ipynb_checkpoints/requirements-checkpoint.txt b/.ipynb_checkpoints/requirements-checkpoint.txt new file mode 100644 index 0000000000000000000000000000000000000000..434962eece0c4691e11152c73e063c8a130cdd99 --- /dev/null +++ b/.ipynb_checkpoints/requirements-checkpoint.txt @@ -0,0 +1,35 @@ +gradio_litmodel3d +ninja +pybind11 +trimesh +diffusers +tqdm +einops +opencv-python +numpy +torch +transformers +torchvision +torchaudio +ConfigArgParse +xatlas +scikit-learn +scikit-image +tritonclient +gevent +geventhttpclient +facexlib +accelerate +ipdb +omegaconf +pymeshlab +pytorch_lightning +taming-transformers-rom1504 +kornia +rembg +onnxruntime +pygltflib +sentencepiece +gradio +uvicorn +fastapi \ No newline at end of file diff --git a/assets/modelviewer-template.html b/assets/modelviewer-template.html index e061161a62636aa33561112face6cef52e7faa45..5a81985e2ccc115efdb2848523da731945bb0957 100644 --- a/assets/modelviewer-template.html +++ b/assets/modelviewer-template.html @@ -3,7 +3,8 @@ - + + + + + + + +
+ +
+ + +
+ +
+ + + \ No newline at end of file diff --git a/gradio_cache/0/white_mesh.glb b/gradio_cache/0/white_mesh.glb new file mode 100644 index 0000000000000000000000000000000000000000..a68b6d9faa1b0dbd24738d1d20c4393091afa89f Binary files /dev/null and b/gradio_cache/0/white_mesh.glb differ diff --git a/gradio_cache/0/white_mesh.html b/gradio_cache/0/white_mesh.html new file mode 100644 index 0000000000000000000000000000000000000000..f10c4e718eca4aba9dae6dab2452a2a038e6a354 --- /dev/null +++ b/gradio_cache/0/white_mesh.html @@ -0,0 +1,57 @@ + + + + + + + + + + + + + +
+ +
+ + +
+ +
+ + + \ No newline at end of file diff --git a/gradio_cache/1/input.png b/gradio_cache/1/input.png new file mode 100644 index 0000000000000000000000000000000000000000..5faffe818bb74bcfcf89e30ce615e2db8b76cf9f Binary files /dev/null and b/gradio_cache/1/input.png differ diff --git a/gradio_cache/1/rembg.png b/gradio_cache/1/rembg.png new file mode 100644 index 0000000000000000000000000000000000000000..51f6216129e50806f4dfcd8a91a724d29931a51f Binary files /dev/null and b/gradio_cache/1/rembg.png differ diff --git a/gradio_cache/1/textured_mesh.glb b/gradio_cache/1/textured_mesh.glb new file mode 100644 index 0000000000000000000000000000000000000000..0c19c14e97fdb4d51cc37036e022d8f2a433455c --- /dev/null +++ b/gradio_cache/1/textured_mesh.glb @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:00f04ece070c997b2b25bc9ad7674e737b8f42f533d68b732ed6ae459709fef2 +size 2464060 diff --git a/gradio_cache/1/textured_mesh.html b/gradio_cache/1/textured_mesh.html new file mode 100644 index 0000000000000000000000000000000000000000..8d8a5238be36e935f3658c84a97f00656a117cd8 --- /dev/null +++ b/gradio_cache/1/textured_mesh.html @@ -0,0 +1,40 @@ + + + + + + + + + + + +
+ +
+ + +
+ +
+ + + \ No newline at end of file diff --git a/gradio_cache/1/white_mesh.glb b/gradio_cache/1/white_mesh.glb new file mode 100644 index 0000000000000000000000000000000000000000..498e3413f62e9a98cc81721470c76f4e02858a64 Binary files /dev/null and b/gradio_cache/1/white_mesh.glb differ diff --git a/gradio_cache/1/white_mesh.html b/gradio_cache/1/white_mesh.html new file mode 100644 index 0000000000000000000000000000000000000000..f10c4e718eca4aba9dae6dab2452a2a038e6a354 --- /dev/null +++ b/gradio_cache/1/white_mesh.html @@ -0,0 +1,57 @@ + + + + + + + + + + + + + +
+ +
+ + +
+ +
+ + + \ No newline at end of file diff --git a/gradio_cache/2/input.png b/gradio_cache/2/input.png new file mode 100644 index 0000000000000000000000000000000000000000..44a343d31504f47ee35cc91896c354675f2814bd Binary files /dev/null and b/gradio_cache/2/input.png differ diff --git a/gradio_cache/2/rembg.png b/gradio_cache/2/rembg.png new file mode 100644 index 0000000000000000000000000000000000000000..1e68778d9ba5e626e658b50f6ea59071a173d4ab Binary files /dev/null and b/gradio_cache/2/rembg.png differ diff --git a/gradio_cache/2/white_mesh.glb b/gradio_cache/2/white_mesh.glb new file mode 100644 index 0000000000000000000000000000000000000000..4392a53bda036b7005e6be325a938e53199946c3 Binary files /dev/null and b/gradio_cache/2/white_mesh.glb differ diff --git a/gradio_cache/2/white_mesh.html b/gradio_cache/2/white_mesh.html new file mode 100644 index 0000000000000000000000000000000000000000..f10c4e718eca4aba9dae6dab2452a2a038e6a354 --- /dev/null +++ b/gradio_cache/2/white_mesh.html @@ -0,0 +1,57 @@ + + + + + + + + + + + + + +
+ +
+ + +
+ +
+ + + \ No newline at end of file diff --git a/gradio_cache/3/input.png b/gradio_cache/3/input.png new file mode 100644 index 0000000000000000000000000000000000000000..ff43e76643e7d80db8233a935624af1ecb64bdaa Binary files /dev/null and b/gradio_cache/3/input.png differ diff --git a/gradio_cache/3/rembg.png b/gradio_cache/3/rembg.png new file mode 100644 index 0000000000000000000000000000000000000000..9e21112fc522ffeaae02a72645d6114a158ae9fd Binary files /dev/null and b/gradio_cache/3/rembg.png differ diff --git a/gradio_cache/3/textured_mesh.glb b/gradio_cache/3/textured_mesh.glb new file mode 100644 index 0000000000000000000000000000000000000000..613df20542b42a1486c0843ab26e69138cfe8096 --- /dev/null +++ b/gradio_cache/3/textured_mesh.glb @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c93b387fb95e04b19f37be60d7b334702406f0e672df73fd5803cbd29d41af8b +size 2183696 diff --git a/gradio_cache/3/textured_mesh.html b/gradio_cache/3/textured_mesh.html new file mode 100644 index 0000000000000000000000000000000000000000..8d8a5238be36e935f3658c84a97f00656a117cd8 --- /dev/null +++ b/gradio_cache/3/textured_mesh.html @@ -0,0 +1,40 @@ + + + + + + + + + + + +
+ +
+ + +
+ +
+ + + \ No newline at end of file diff --git a/gradio_cache/3/white_mesh.glb b/gradio_cache/3/white_mesh.glb new file mode 100644 index 0000000000000000000000000000000000000000..f74d0277469124044e94af5f89fb6313799ac077 Binary files /dev/null and b/gradio_cache/3/white_mesh.glb differ diff --git a/gradio_cache/3/white_mesh.html b/gradio_cache/3/white_mesh.html new file mode 100644 index 0000000000000000000000000000000000000000..f10c4e718eca4aba9dae6dab2452a2a038e6a354 --- /dev/null +++ b/gradio_cache/3/white_mesh.html @@ -0,0 +1,57 @@ + + + + + + + + + + + + + +
+ +
+ + +
+ +
+ + + \ No newline at end of file diff --git a/gradio_cache/4/input.png b/gradio_cache/4/input.png new file mode 100644 index 0000000000000000000000000000000000000000..4b84b54b925924f2ba2daa0c504dfb63c84fb8e5 Binary files /dev/null and b/gradio_cache/4/input.png differ diff --git a/gradio_cache/4/rembg.png b/gradio_cache/4/rembg.png new file mode 100644 index 0000000000000000000000000000000000000000..a2b9b6981b7f71680077b2b29f529b05ca30b6f2 Binary files /dev/null and b/gradio_cache/4/rembg.png differ diff --git a/gradio_cache/4/textured_mesh.glb b/gradio_cache/4/textured_mesh.glb new file mode 100644 index 0000000000000000000000000000000000000000..b046101ce7c437440c247c62cd99c833a317859f --- /dev/null +++ b/gradio_cache/4/textured_mesh.glb @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9a3572334508a5661c1ff75f4ad228c6d1b40b5e46b7eded9fc00759d738ec13 +size 2472736 diff --git a/gradio_cache/4/textured_mesh.html b/gradio_cache/4/textured_mesh.html new file mode 100644 index 0000000000000000000000000000000000000000..8d8a5238be36e935f3658c84a97f00656a117cd8 --- /dev/null +++ b/gradio_cache/4/textured_mesh.html @@ -0,0 +1,40 @@ + + + + + + + + + + + +
+ +
+ + +
+ +
+ + + \ No newline at end of file diff --git a/gradio_cache/4/white_mesh.glb b/gradio_cache/4/white_mesh.glb new file mode 100644 index 0000000000000000000000000000000000000000..50e92ca5dbef554bce53c445b2f4b2616740e4d5 Binary files /dev/null and b/gradio_cache/4/white_mesh.glb differ diff --git a/gradio_cache/4/white_mesh.html b/gradio_cache/4/white_mesh.html new file mode 100644 index 0000000000000000000000000000000000000000..f10c4e718eca4aba9dae6dab2452a2a038e6a354 --- /dev/null +++ b/gradio_cache/4/white_mesh.html @@ -0,0 +1,57 @@ + + + + + + + + + + + + + +
+ +
+ + +
+ +
+ + + \ No newline at end of file diff --git a/hg_app.py b/hg_app.py index 3fbed804023048c54792ee8f81913a3aaa8dc253..e1efd46f55fa90e17b3fc6879d2ee5cf66cd339a 100644 --- a/hg_app.py +++ b/hg_app.py @@ -431,6 +431,6 @@ if __name__ == '__main__': app.mount("/static", StaticFiles(directory=static_dir), name="static") demo = build_app() - demo.queue(max_size=1) + demo.queue(max_size=10) app = gr.mount_gradio_app(app, demo, path="/") uvicorn.run(app, host=IP, port=PORT) diff --git a/hy3dgen/.ipynb_checkpoints/text2image-checkpoint.py b/hy3dgen/.ipynb_checkpoints/text2image-checkpoint.py new file mode 100644 index 0000000000000000000000000000000000000000..ec2c1cde6b416aa787fc8d0ce1575118a58564b4 --- /dev/null +++ b/hy3dgen/.ipynb_checkpoints/text2image-checkpoint.py @@ -0,0 +1,92 @@ +# Open Source Model Licensed under the Apache License Version 2.0 +# and Other Licenses of the Third-Party Components therein: +# The below Model in this distribution may have been modified by THL A29 Limited +# ("Tencent Modifications"). All Tencent Modifications are Copyright (C) 2024 THL A29 Limited. + +# Copyright (C) 2024 THL A29 Limited, a Tencent company. All rights reserved. +# The below software and/or models in this distribution may have been +# modified by THL A29 Limited ("Tencent Modifications"). +# All Tencent Modifications are Copyright (C) THL A29 Limited. + +# Hunyuan 3D is licensed under the TENCENT HUNYUAN NON-COMMERCIAL LICENSE AGREEMENT +# except for the third-party components listed below. +# Hunyuan 3D does not impose any additional limitations beyond what is outlined +# in the repsective licenses of these third-party components. +# Users must comply with all terms and conditions of original licenses of these third-party +# components and must ensure that the usage of the third party components adheres to +# all relevant laws and regulations. + +# For avoidance of doubts, Hunyuan 3D means the large language models and +# their software and algorithms, including trained model weights, parameters (including +# optimizer states), machine-learning model code, inference-enabling code, training-enabling code, +# fine-tuning enabling code and other elements of the foregoing made publicly available +# by Tencent in accordance with TENCENT HUNYUAN COMMUNITY LICENSE AGREEMENT. 
+ + +import os +import random + +import numpy as np +import torch +from diffusers import AutoPipelineForText2Image + + +def seed_everything(seed): + random.seed(seed) + np.random.seed(seed) + torch.manual_seed(seed) + os.environ["PL_GLOBAL_SEED"] = str(seed) + + +class HunyuanDiTPipeline: + def __init__( + self, + model_path="Tencent-Hunyuan/HunyuanDiT-v1.1-Diffusers-Distilled", + device='cuda' + ): + self.device = device + self.pipe = AutoPipelineForText2Image.from_pretrained( + model_path, + torch_dtype=torch.float16, + enable_pag=True, + pag_applied_layers=["blocks.(16|17|18|19)"] + ).to(device) + self.pos_txt = ",白色背景,3D风格,最佳质量" + self.neg_txt = "文本,特写,裁剪,出框,最差质量,低质量,JPEG伪影,PGLY,重复,病态," \ + "残缺,多余的手指,变异的手,画得不好的手,画得不好的脸,变异,畸形,模糊,脱水,糟糕的解剖学," \ + "糟糕的比例,多余的肢体,克隆的脸,毁容,恶心的比例,畸形的肢体,缺失的手臂,缺失的腿," \ + "额外的手臂,额外的腿,融合的手指,手指太多,长脖子" + + def compile(self): + # accelarate hunyuan-dit transformer,first inference will cost long time + torch.set_float32_matmul_precision('high') + self.pipe.transformer = torch.compile(self.pipe.transformer, fullgraph=True) + # self.pipe.vae.decode = torch.compile(self.pipe.vae.decode, fullgraph=True) + generator = torch.Generator(device=self.pipe.device) # infer once for hot-start + out_img = self.pipe( + prompt='美少女战士', + negative_prompt='模糊', + num_inference_steps=25, + pag_scale=1.3, + width=1024, + height=1024, + generator=generator, + return_dict=False + )[0][0] + + @torch.no_grad() + def __call__(self, prompt, seed=0): + seed_everything(seed) + generator = torch.Generator(device=self.pipe.device) + generator = generator.manual_seed(int(seed)) + out_img = self.pipe( + prompt=self.pos_txt+prompt, + negative_prompt=self.neg_txt, + num_inference_steps=25, + pag_scale=1.3, + width=1024, + height=1024, + generator=generator, + return_dict=False + )[0][0] + return out_img diff --git a/hy3dgen/__pycache__/__init__.cpython-311.pyc b/hy3dgen/__pycache__/__init__.cpython-311.pyc new file mode 100644 index 0000000000000000000000000000000000000000..640f987956367f2fd36fa90e7ffef12765dca41e Binary files /dev/null and b/hy3dgen/__pycache__/__init__.cpython-311.pyc differ diff --git a/hy3dgen/__pycache__/rembg.cpython-311.pyc b/hy3dgen/__pycache__/rembg.cpython-311.pyc new file mode 100644 index 0000000000000000000000000000000000000000..f2a313acc10606f269690d10c35e856c891db8ac Binary files /dev/null and b/hy3dgen/__pycache__/rembg.cpython-311.pyc differ diff --git a/hy3dgen/__pycache__/text2image.cpython-311.pyc b/hy3dgen/__pycache__/text2image.cpython-311.pyc new file mode 100644 index 0000000000000000000000000000000000000000..0b1e6dfb994a054c0f02db328680000235b3d800 Binary files /dev/null and b/hy3dgen/__pycache__/text2image.cpython-311.pyc differ diff --git a/hy3dgen/shapegen/__pycache__/__init__.cpython-311.pyc b/hy3dgen/shapegen/__pycache__/__init__.cpython-311.pyc new file mode 100644 index 0000000000000000000000000000000000000000..036ef7d0051f38de016075fe3834af4c6674993d Binary files /dev/null and b/hy3dgen/shapegen/__pycache__/__init__.cpython-311.pyc differ diff --git a/hy3dgen/shapegen/__pycache__/pipelines.cpython-311.pyc b/hy3dgen/shapegen/__pycache__/pipelines.cpython-311.pyc new file mode 100644 index 0000000000000000000000000000000000000000..72479bc323a80312c036ac783e665bc703f651ad Binary files /dev/null and b/hy3dgen/shapegen/__pycache__/pipelines.cpython-311.pyc differ diff --git a/hy3dgen/shapegen/__pycache__/postprocessors.cpython-311.pyc b/hy3dgen/shapegen/__pycache__/postprocessors.cpython-311.pyc new file mode 100644 index 
0000000000000000000000000000000000000000..fb9bb579e40eccdc4ccebb63df6595698b08cd51 Binary files /dev/null and b/hy3dgen/shapegen/__pycache__/postprocessors.cpython-311.pyc differ diff --git a/hy3dgen/shapegen/__pycache__/preprocessors.cpython-311.pyc b/hy3dgen/shapegen/__pycache__/preprocessors.cpython-311.pyc new file mode 100644 index 0000000000000000000000000000000000000000..8b68cf6e7f20e7ab2908ea3bdc5581a571ffae99 Binary files /dev/null and b/hy3dgen/shapegen/__pycache__/preprocessors.cpython-311.pyc differ diff --git a/hy3dgen/shapegen/__pycache__/schedulers.cpython-311.pyc b/hy3dgen/shapegen/__pycache__/schedulers.cpython-311.pyc new file mode 100644 index 0000000000000000000000000000000000000000..c846e3a07ef2098cf479c04a16b3b52e8db67437 Binary files /dev/null and b/hy3dgen/shapegen/__pycache__/schedulers.cpython-311.pyc differ diff --git a/hy3dgen/shapegen/models/__pycache__/__init__.cpython-311.pyc b/hy3dgen/shapegen/models/__pycache__/__init__.cpython-311.pyc new file mode 100644 index 0000000000000000000000000000000000000000..977f2ff85f3e71a67a8940e007df240ff126c2b8 Binary files /dev/null and b/hy3dgen/shapegen/models/__pycache__/__init__.cpython-311.pyc differ diff --git a/hy3dgen/shapegen/models/__pycache__/conditioner.cpython-311.pyc b/hy3dgen/shapegen/models/__pycache__/conditioner.cpython-311.pyc new file mode 100644 index 0000000000000000000000000000000000000000..bd0f64a4a5d844fb3cfa75b8931dfd71428df279 Binary files /dev/null and b/hy3dgen/shapegen/models/__pycache__/conditioner.cpython-311.pyc differ diff --git a/hy3dgen/shapegen/models/__pycache__/hunyuan3ddit.cpython-311.pyc b/hy3dgen/shapegen/models/__pycache__/hunyuan3ddit.cpython-311.pyc new file mode 100644 index 0000000000000000000000000000000000000000..66820057d184281798ca0ce8b64aa14cbcb815ba Binary files /dev/null and b/hy3dgen/shapegen/models/__pycache__/hunyuan3ddit.cpython-311.pyc differ diff --git a/hy3dgen/shapegen/models/__pycache__/vae.cpython-311.pyc b/hy3dgen/shapegen/models/__pycache__/vae.cpython-311.pyc new file mode 100644 index 0000000000000000000000000000000000000000..2f18b352b2ebc43b49e14ba4abccd0de94dafe1d Binary files /dev/null and b/hy3dgen/shapegen/models/__pycache__/vae.cpython-311.pyc differ diff --git a/hy3dgen/texgen/__pycache__/__init__.cpython-311.pyc b/hy3dgen/texgen/__pycache__/__init__.cpython-311.pyc new file mode 100644 index 0000000000000000000000000000000000000000..43d1f1dde2a15273e160f237ff69923d3abb93da Binary files /dev/null and b/hy3dgen/texgen/__pycache__/__init__.cpython-311.pyc differ diff --git a/hy3dgen/texgen/__pycache__/pipelines.cpython-311.pyc b/hy3dgen/texgen/__pycache__/pipelines.cpython-311.pyc new file mode 100644 index 0000000000000000000000000000000000000000..297c9aa6d1597b846cf97d0c7104aa84b6a0e31e Binary files /dev/null and b/hy3dgen/texgen/__pycache__/pipelines.cpython-311.pyc differ diff --git a/hy3dgen/texgen/custom_rasterizer/build/lib.linux-x86_64-cpython-311/custom_rasterizer/__init__.py b/hy3dgen/texgen/custom_rasterizer/build/lib.linux-x86_64-cpython-311/custom_rasterizer/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..df40dcc8d4819eb903263ff1faf70ce902eb7e07 --- /dev/null +++ b/hy3dgen/texgen/custom_rasterizer/build/lib.linux-x86_64-cpython-311/custom_rasterizer/__init__.py @@ -0,0 +1,32 @@ +# Open Source Model Licensed under the Apache License Version 2.0 +# and Other Licenses of the Third-Party Components therein: +# The below Model in this distribution may have been modified by THL A29 Limited +# ("Tencent 
Modifications"). All Tencent Modifications are Copyright (C) 2024 THL A29 Limited. + +# Copyright (C) 2024 THL A29 Limited, a Tencent company. All rights reserved. +# The below software and/or models in this distribution may have been +# modified by THL A29 Limited ("Tencent Modifications"). +# All Tencent Modifications are Copyright (C) THL A29 Limited. + +# Hunyuan 3D is licensed under the TENCENT HUNYUAN NON-COMMERCIAL LICENSE AGREEMENT +# except for the third-party components listed below. +# Hunyuan 3D does not impose any additional limitations beyond what is outlined +# in the repsective licenses of these third-party components. +# Users must comply with all terms and conditions of original licenses of these third-party +# components and must ensure that the usage of the third party components adheres to +# all relevant laws and regulations. + +# For avoidance of doubts, Hunyuan 3D means the large language models and +# their software and algorithms, including trained model weights, parameters (including +# optimizer states), machine-learning model code, inference-enabling code, training-enabling code, +# fine-tuning enabling code and other elements of the foregoing made publicly available +# by Tencent in accordance with TENCENT HUNYUAN COMMUNITY LICENSE AGREEMENT. + +''' +from .hierarchy import BuildHierarchy, BuildHierarchyWithColor +from .io_obj import LoadObj, LoadObjWithTexture +from .render import rasterize, interpolate +''' +from .io_glb import * +from .io_obj import * +from .render import * diff --git a/hy3dgen/texgen/custom_rasterizer/build/lib.linux-x86_64-cpython-311/custom_rasterizer/io_glb.py b/hy3dgen/texgen/custom_rasterizer/build/lib.linux-x86_64-cpython-311/custom_rasterizer/io_glb.py new file mode 100644 index 0000000000000000000000000000000000000000..c5d7dc8c6127e62848dda8e79fdc281c5a7b42cb --- /dev/null +++ b/hy3dgen/texgen/custom_rasterizer/build/lib.linux-x86_64-cpython-311/custom_rasterizer/io_glb.py @@ -0,0 +1,248 @@ +# Open Source Model Licensed under the Apache License Version 2.0 +# and Other Licenses of the Third-Party Components therein: +# The below Model in this distribution may have been modified by THL A29 Limited +# ("Tencent Modifications"). All Tencent Modifications are Copyright (C) 2024 THL A29 Limited. + +# Copyright (C) 2024 THL A29 Limited, a Tencent company. All rights reserved. +# The below software and/or models in this distribution may have been +# modified by THL A29 Limited ("Tencent Modifications"). +# All Tencent Modifications are Copyright (C) THL A29 Limited. + +# Hunyuan 3D is licensed under the TENCENT HUNYUAN NON-COMMERCIAL LICENSE AGREEMENT +# except for the third-party components listed below. +# Hunyuan 3D does not impose any additional limitations beyond what is outlined +# in the repsective licenses of these third-party components. +# Users must comply with all terms and conditions of original licenses of these third-party +# components and must ensure that the usage of the third party components adheres to +# all relevant laws and regulations. + +# For avoidance of doubts, Hunyuan 3D means the large language models and +# their software and algorithms, including trained model weights, parameters (including +# optimizer states), machine-learning model code, inference-enabling code, training-enabling code, +# fine-tuning enabling code and other elements of the foregoing made publicly available +# by Tencent in accordance with TENCENT HUNYUAN COMMUNITY LICENSE AGREEMENT. 
+ +import base64 +import io +import os + +import numpy as np +from PIL import Image as PILImage +from pygltflib import GLTF2 +from scipy.spatial.transform import Rotation as R + + +# Function to extract buffer data +def get_buffer_data(gltf, buffer_view): + buffer = gltf.buffers[buffer_view.buffer] + buffer_data = gltf.get_data_from_buffer_uri(buffer.uri) + byte_offset = buffer_view.byteOffset if buffer_view.byteOffset else 0 + byte_length = buffer_view.byteLength + return buffer_data[byte_offset:byte_offset + byte_length] + + +# Function to extract attribute data +def get_attribute_data(gltf, accessor_index): + accessor = gltf.accessors[accessor_index] + buffer_view = gltf.bufferViews[accessor.bufferView] + buffer_data = get_buffer_data(gltf, buffer_view) + + comptype = {5120: np.int8, 5121: np.uint8, 5122: np.int16, 5123: np.uint16, 5125: np.uint32, 5126: np.float32} + dtype = comptype[accessor.componentType] + + t2n = {'SCALAR': 1, 'VEC2': 2, 'VEC3': 3, 'VEC4': 4, 'MAT2': 4, 'MAT3': 9, 'MAT4': 16} + num_components = t2n[accessor.type] + + # Calculate the correct slice of data + byte_offset = accessor.byteOffset if accessor.byteOffset else 0 + byte_stride = buffer_view.byteStride if buffer_view.byteStride else num_components * np.dtype(dtype).itemsize + count = accessor.count + + # Extract the attribute data + attribute_data = np.zeros((count, num_components), dtype=dtype) + for i in range(count): + start = byte_offset + i * byte_stride + end = start + num_components * np.dtype(dtype).itemsize + attribute_data[i] = np.frombuffer(buffer_data[start:end], dtype=dtype) + + return attribute_data + + +# Function to extract image data +def get_image_data(gltf, image, folder): + if image.uri: + if image.uri.startswith('data:'): + # Data URI + header, encoded = image.uri.split(',', 1) + data = base64.b64decode(encoded) + else: + # External file + fn = image.uri + if not os.path.isabs(fn): + fn = folder + '/' + fn + with open(fn, 'rb') as f: + data = f.read() + else: + buffer_view = gltf.bufferViews[image.bufferView] + data = get_buffer_data(gltf, buffer_view) + return data + + +# Function to convert triangle strip to triangles +def convert_triangle_strip_to_triangles(indices): + triangles = [] + for i in range(len(indices) - 2): + if i % 2 == 0: + triangles.append([indices[i], indices[i + 1], indices[i + 2]]) + else: + triangles.append([indices[i], indices[i + 2], indices[i + 1]]) + return np.array(triangles).reshape(-1, 3) + + +# Function to convert triangle fan to triangles +def convert_triangle_fan_to_triangles(indices): + triangles = [] + for i in range(1, len(indices) - 1): + triangles.append([indices[0], indices[i], indices[i + 1]]) + return np.array(triangles).reshape(-1, 3) + + +# Function to get the transformation matrix from a node +def get_node_transform(node): + if node.matrix: + return np.array(node.matrix).reshape(4, 4).T + else: + T = np.eye(4) + if node.translation: + T[:3, 3] = node.translation + if node.rotation: + R_mat = R.from_quat(node.rotation).as_matrix() + T[:3, :3] = R_mat + if node.scale: + S = np.diag(node.scale + [1]) + T = T @ S + return T + + +def get_world_transform(gltf, node_index, parents, world_transforms): + if parents[node_index] == -2: + return world_transforms[node_index] + + node = gltf.nodes[node_index] + if parents[node_index] == -1: + world_transforms[node_index] = get_node_transform(node) + parents[node_index] = -2 + return world_transforms[node_index] + + parent_index = parents[node_index] + parent_transform = get_world_transform(gltf, parent_index, 
parents, world_transforms) + world_transforms[node_index] = parent_transform @ get_node_transform(node) + parents[node_index] = -2 + return world_transforms[node_index] + + +def LoadGlb(path): + # Load the GLB file using pygltflib + gltf = GLTF2().load(path) + + primitives = [] + images = {} + # Iterate through the meshes in the GLB file + + world_transforms = [np.identity(4) for i in range(len(gltf.nodes))] + parents = [-1 for i in range(len(gltf.nodes))] + for node_index, node in enumerate(gltf.nodes): + for idx in node.children: + parents[idx] = node_index + # for i in range(len(gltf.nodes)): + # get_world_transform(gltf, i, parents, world_transform) + + for node_index, node in enumerate(gltf.nodes): + if node.mesh is not None: + world_transform = get_world_transform(gltf, node_index, parents, world_transforms) + # Iterate through the primitives in the mesh + mesh = gltf.meshes[node.mesh] + for primitive in mesh.primitives: + # Access the attributes of the primitive + attributes = primitive.attributes.__dict__ + mode = primitive.mode if primitive.mode is not None else 4 # Default to TRIANGLES + result = {} + if primitive.indices is not None: + indices = get_attribute_data(gltf, primitive.indices) + if mode == 4: # TRIANGLES + face_indices = indices.reshape(-1, 3) + elif mode == 5: # TRIANGLE_STRIP + face_indices = convert_triangle_strip_to_triangles(indices) + elif mode == 6: # TRIANGLE_FAN + face_indices = convert_triangle_fan_to_triangles(indices) + else: + continue + result['F'] = face_indices + + # Extract vertex positions + if 'POSITION' in attributes and attributes['POSITION'] is not None: + positions = get_attribute_data(gltf, attributes['POSITION']) + # Apply the world transformation to the positions + positions_homogeneous = np.hstack([positions, np.ones((positions.shape[0], 1))]) + transformed_positions = (world_transform @ positions_homogeneous.T).T[:, :3] + result['V'] = transformed_positions + + # Extract vertex colors + if 'COLOR_0' in attributes and attributes['COLOR_0'] is not None: + colors = get_attribute_data(gltf, attributes['COLOR_0']) + if colors.shape[-1] > 3: + colors = colors[..., :3] + result['VC'] = colors + + # Extract UVs + if 'TEXCOORD_0' in attributes and not attributes['TEXCOORD_0'] is None: + uvs = get_attribute_data(gltf, attributes['TEXCOORD_0']) + result['UV'] = uvs + + if primitive.material is not None: + material = gltf.materials[primitive.material] + if material.pbrMetallicRoughness is not None and material.pbrMetallicRoughness.baseColorTexture is not None: + texture_index = material.pbrMetallicRoughness.baseColorTexture.index + texture = gltf.textures[texture_index] + image_index = texture.source + if not image_index in images: + image = gltf.images[image_index] + image_data = get_image_data(gltf, image, os.path.dirname(path)) + pil_image = PILImage.open(io.BytesIO(image_data)) + if pil_image.mode != 'RGB': + pil_image = pil_image.convert('RGB') + images[image_index] = pil_image + result['TEX'] = image_index + elif material.emissiveTexture is not None: + texture_index = material.emissiveTexture.index + texture = gltf.textures[texture_index] + image_index = texture.source + if not image_index in images: + image = gltf.images[image_index] + image_data = get_image_data(gltf, image, os.path.dirname(path)) + pil_image = PILImage.open(io.BytesIO(image_data)) + if pil_image.mode != 'RGB': + pil_image = pil_image.convert('RGB') + images[image_index] = pil_image + result['TEX'] = image_index + else: + if material.pbrMetallicRoughness is not None: + 
base_color = material.pbrMetallicRoughness.baseColorFactor + else: + base_color = np.array([0.8, 0.8, 0.8], dtype=np.float32) + result['MC'] = base_color + + primitives.append(result) + + return primitives, images + + +def RotatePrimitives(primitives, transform): + for i in range(len(primitives)): + if 'V' in primitives[i]: + primitives[i]['V'] = primitives[i]['V'] @ transform.T + + +if __name__ == '__main__': + path = 'data/test.glb' + LoadGlb(path) diff --git a/hy3dgen/texgen/custom_rasterizer/build/lib.linux-x86_64-cpython-311/custom_rasterizer/io_obj.py b/hy3dgen/texgen/custom_rasterizer/build/lib.linux-x86_64-cpython-311/custom_rasterizer/io_obj.py new file mode 100644 index 0000000000000000000000000000000000000000..a72c478d8efcb9a3d71a67ce5f167559ef76b922 --- /dev/null +++ b/hy3dgen/texgen/custom_rasterizer/build/lib.linux-x86_64-cpython-311/custom_rasterizer/io_obj.py @@ -0,0 +1,76 @@ +# Open Source Model Licensed under the Apache License Version 2.0 +# and Other Licenses of the Third-Party Components therein: +# The below Model in this distribution may have been modified by THL A29 Limited +# ("Tencent Modifications"). All Tencent Modifications are Copyright (C) 2024 THL A29 Limited. + +# Copyright (C) 2024 THL A29 Limited, a Tencent company. All rights reserved. +# The below software and/or models in this distribution may have been +# modified by THL A29 Limited ("Tencent Modifications"). +# All Tencent Modifications are Copyright (C) THL A29 Limited. + +# Hunyuan 3D is licensed under the TENCENT HUNYUAN NON-COMMERCIAL LICENSE AGREEMENT +# except for the third-party components listed below. +# Hunyuan 3D does not impose any additional limitations beyond what is outlined +# in the repsective licenses of these third-party components. +# Users must comply with all terms and conditions of original licenses of these third-party +# components and must ensure that the usage of the third party components adheres to +# all relevant laws and regulations. + +# For avoidance of doubts, Hunyuan 3D means the large language models and +# their software and algorithms, including trained model weights, parameters (including +# optimizer states), machine-learning model code, inference-enabling code, training-enabling code, +# fine-tuning enabling code and other elements of the foregoing made publicly available +# by Tencent in accordance with TENCENT HUNYUAN COMMUNITY LICENSE AGREEMENT. 
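+# OBJ loading helpers for the custom rasterizer:
+# LoadObj parses 'v'/'f' lines into a float32 vertex array and an int32 triangle array;
+# LoadObjWithTexture additionally parses 'vt' lines, fan-triangulates polygonal faces,
+# and loads the texture image as RGB via OpenCV.
+# Usage sketch (the file paths below are placeholders, not files shipped with the repo):
+#   V, F = LoadObj('mesh.obj')
+#   V, VT, F, FT, tex = LoadObjWithTexture('mesh.obj', 'albedo.png')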
+ +import cv2 +import numpy as np + + +def LoadObj(fn): + lines = [l.strip() for l in open(fn)] + vertices = [] + faces = [] + for l in lines: + words = [w for w in l.split(' ') if w != ''] + if len(words) == 0: + continue + if words[0] == 'v': + v = [float(words[i]) for i in range(1, 4)] + vertices.append(v) + elif words[0] == 'f': + f = [int(words[i]) - 1 for i in range(1, 4)] + faces.append(f) + + return np.array(vertices).astype('float32'), np.array(faces).astype('int32') + + +def LoadObjWithTexture(fn, tex_fn): + lines = [l.strip() for l in open(fn)] + vertices = [] + vertex_textures = [] + faces = [] + face_textures = [] + for l in lines: + words = [w for w in l.split(' ') if w != ''] + if len(words) == 0: + continue + if words[0] == 'v': + v = [float(words[i]) for i in range(1, len(words))] + vertices.append(v) + elif words[0] == 'vt': + v = [float(words[i]) for i in range(1, len(words))] + vertex_textures.append(v) + elif words[0] == 'f': + f = [] + ft = [] + for i in range(1, len(words)): + t = words[i].split('/') + f.append(int(t[0]) - 1) + ft.append(int(t[1]) - 1) + for i in range(2, len(f)): + faces.append([f[0], f[i - 1], f[i]]) + face_textures.append([ft[0], ft[i - 1], ft[i]]) + + tex_image = cv2.cvtColor(cv2.imread(tex_fn), cv2.COLOR_BGR2RGB) + return np.array(vertices).astype('float32'), np.array(vertex_textures).astype('float32'), np.array(faces).astype( + 'int32'), np.array(face_textures).astype('int32'), tex_image diff --git a/hy3dgen/texgen/custom_rasterizer/build/lib.linux-x86_64-cpython-311/custom_rasterizer/render.py b/hy3dgen/texgen/custom_rasterizer/build/lib.linux-x86_64-cpython-311/custom_rasterizer/render.py new file mode 100644 index 0000000000000000000000000000000000000000..743d4aac4da9e1e18374ce712ac24d19e6788870 --- /dev/null +++ b/hy3dgen/texgen/custom_rasterizer/build/lib.linux-x86_64-cpython-311/custom_rasterizer/render.py @@ -0,0 +1,41 @@ +# Open Source Model Licensed under the Apache License Version 2.0 +# and Other Licenses of the Third-Party Components therein: +# The below Model in this distribution may have been modified by THL A29 Limited +# ("Tencent Modifications"). All Tencent Modifications are Copyright (C) 2024 THL A29 Limited. + +# Copyright (C) 2024 THL A29 Limited, a Tencent company. All rights reserved. +# The below software and/or models in this distribution may have been +# modified by THL A29 Limited ("Tencent Modifications"). +# All Tencent Modifications are Copyright (C) THL A29 Limited. + +# Hunyuan 3D is licensed under the TENCENT HUNYUAN NON-COMMERCIAL LICENSE AGREEMENT +# except for the third-party components listed below. +# Hunyuan 3D does not impose any additional limitations beyond what is outlined +# in the repsective licenses of these third-party components. +# Users must comply with all terms and conditions of original licenses of these third-party +# components and must ensure that the usage of the third party components adheres to +# all relevant laws and regulations. + +# For avoidance of doubts, Hunyuan 3D means the large language models and +# their software and algorithms, including trained model weights, parameters (including +# optimizer states), machine-learning model code, inference-enabling code, training-enabling code, +# fine-tuning enabling code and other elements of the foregoing made publicly available +# by Tencent in accordance with TENCENT HUNYUAN COMMUNITY LICENSE AGREEMENT. 
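+# Thin Python wrappers around the compiled custom_rasterizer_kernel extension:
+# rasterize() returns per-pixel triangle indices (findices, apparently 1-based with 0 for
+# uncovered pixels) and barycentric weights; interpolate() blends per-vertex attributes
+# with those weights into a per-pixel attribute map.
+# Usage sketch (tensor shapes and the (height, width) ordering are assumptions,
+# not checked against the CUDA kernel):
+#   findices, bary = rasterize(pos, tri, (height, width))       # pos: (1, N, 4) float, tri: (M, 3) int
+#   pixel_attr = interpolate(vertex_attr, findices, bary, tri)  # vertex_attr: (1, N, C) float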
+ +import custom_rasterizer_kernel +import torch + + +def rasterize(pos, tri, resolution, clamp_depth=torch.zeros(0), use_depth_prior=0): + assert (pos.device == tri.device) + findices, barycentric = custom_rasterizer_kernel.rasterize_image(pos[0], tri, clamp_depth, resolution[1], + resolution[0], 1e-6, use_depth_prior) + return findices, barycentric + + +def interpolate(col, findices, barycentric, tri): + f = findices - 1 + (findices == 0) + vcol = col[0, tri.long()[f.long()]] + result = barycentric.view(*barycentric.shape, 1) * vcol + result = torch.sum(result, axis=-2) + return result.view(1, *result.shape) diff --git a/hy3dgen/texgen/custom_rasterizer/build/lib.linux-x86_64-cpython-311/custom_rasterizer_kernel.cpython-311-x86_64-linux-gnu.so b/hy3dgen/texgen/custom_rasterizer/build/lib.linux-x86_64-cpython-311/custom_rasterizer_kernel.cpython-311-x86_64-linux-gnu.so new file mode 100644 index 0000000000000000000000000000000000000000..fae9d0b229821dfe744e1b7b70250848eaa60797 Binary files /dev/null and b/hy3dgen/texgen/custom_rasterizer/build/lib.linux-x86_64-cpython-311/custom_rasterizer_kernel.cpython-311-x86_64-linux-gnu.so differ diff --git a/hy3dgen/texgen/custom_rasterizer/build/temp.linux-x86_64-cpython-311/.ninja_deps b/hy3dgen/texgen/custom_rasterizer/build/temp.linux-x86_64-cpython-311/.ninja_deps new file mode 100644 index 0000000000000000000000000000000000000000..0227139e664b127ab09b323a310ef5b67e038309 Binary files /dev/null and b/hy3dgen/texgen/custom_rasterizer/build/temp.linux-x86_64-cpython-311/.ninja_deps differ diff --git a/hy3dgen/texgen/custom_rasterizer/build/temp.linux-x86_64-cpython-311/.ninja_log b/hy3dgen/texgen/custom_rasterizer/build/temp.linux-x86_64-cpython-311/.ninja_log new file mode 100644 index 0000000000000000000000000000000000000000..961073a8816d177520ae1b8a655f413b83678c12 --- /dev/null +++ b/hy3dgen/texgen/custom_rasterizer/build/temp.linux-x86_64-cpython-311/.ninja_log @@ -0,0 +1,4 @@ +# ninja log v5 +5 12944 1737469910283155280 /apdcephfs_cq5/share_300600172/huiwenshi/repos/Hunyuan3D-2-spaces/hy3dgen/texgen/custom_rasterizer/build/temp.linux-x86_64-cpython-311/lib/custom_rasterizer_kernel/rasterizer.o 6b1f5e5e4b199209 +4 13455 1737469910695486266 /apdcephfs_cq5/share_300600172/huiwenshi/repos/Hunyuan3D-2-spaces/hy3dgen/texgen/custom_rasterizer/build/temp.linux-x86_64-cpython-311/lib/custom_rasterizer_kernel/grid_neighbor.o af3659b839e5e6e4 +6 34765 1737469932096669642 /apdcephfs_cq5/share_300600172/huiwenshi/repos/Hunyuan3D-2-spaces/hy3dgen/texgen/custom_rasterizer/build/temp.linux-x86_64-cpython-311/lib/custom_rasterizer_kernel/rasterizer_gpu.o f5d05646c31ca370 diff --git a/hy3dgen/texgen/custom_rasterizer/build/temp.linux-x86_64-cpython-311/build.ninja b/hy3dgen/texgen/custom_rasterizer/build/temp.linux-x86_64-cpython-311/build.ninja new file mode 100644 index 0000000000000000000000000000000000000000..fb26eea1e35d1f43eba8e2b4be3527f6072dce16 --- /dev/null +++ b/hy3dgen/texgen/custom_rasterizer/build/temp.linux-x86_64-cpython-311/build.ninja @@ -0,0 +1,34 @@ +ninja_required_version = 1.3 +cxx = c++ +nvcc = /usr/local/cuda/bin/nvcc + +cflags = -pthread -B /opt/conda/envs/hunyuan3d-2-open/compiler_compat -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /opt/conda/envs/hunyuan3d-2-open/include -fPIC -O2 -isystem /opt/conda/envs/hunyuan3d-2-open/include -fPIC -I/opt/conda/envs/hunyuan3d-2-open/lib/python3.11/site-packages/torch/include -I/opt/conda/envs/hunyuan3d-2-open/lib/python3.11/site-packages/torch/include/torch/csrc/api/include 
-I/opt/conda/envs/hunyuan3d-2-open/lib/python3.11/site-packages/torch/include/TH -I/opt/conda/envs/hunyuan3d-2-open/lib/python3.11/site-packages/torch/include/THC -I/usr/local/cuda/include -I/opt/conda/envs/hunyuan3d-2-open/include/python3.11 -c +post_cflags = -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=custom_rasterizer_kernel -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++17 +cuda_cflags = -I/opt/conda/envs/hunyuan3d-2-open/lib/python3.11/site-packages/torch/include -I/opt/conda/envs/hunyuan3d-2-open/lib/python3.11/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/envs/hunyuan3d-2-open/lib/python3.11/site-packages/torch/include/TH -I/opt/conda/envs/hunyuan3d-2-open/lib/python3.11/site-packages/torch/include/THC -I/usr/local/cuda/include -I/opt/conda/envs/hunyuan3d-2-open/include/python3.11 -c +cuda_post_cflags = -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=custom_rasterizer_kernel -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_90,code=compute_90 -gencode=arch=compute_90,code=sm_90 -std=c++17 +cuda_dlink_post_cflags = +ldflags = + +rule compile + command = $cxx -MMD -MF $out.d $cflags -c $in -o $out $post_cflags + depfile = $out.d + deps = gcc + +rule cuda_compile + depfile = $out.d + deps = gcc + command = $nvcc --generate-dependencies-with-compile --dependency-output $out.d $cuda_cflags -c $in -o $out $cuda_post_cflags + + + + + +build /apdcephfs_cq5/share_300600172/huiwenshi/repos/Hunyuan3D-2-spaces/hy3dgen/texgen/custom_rasterizer/build/temp.linux-x86_64-cpython-311/lib/custom_rasterizer_kernel/grid_neighbor.o: compile /apdcephfs_cq5/share_300600172/huiwenshi/repos/Hunyuan3D-2-spaces/hy3dgen/texgen/custom_rasterizer/lib/custom_rasterizer_kernel/grid_neighbor.cpp +build /apdcephfs_cq5/share_300600172/huiwenshi/repos/Hunyuan3D-2-spaces/hy3dgen/texgen/custom_rasterizer/build/temp.linux-x86_64-cpython-311/lib/custom_rasterizer_kernel/rasterizer.o: compile /apdcephfs_cq5/share_300600172/huiwenshi/repos/Hunyuan3D-2-spaces/hy3dgen/texgen/custom_rasterizer/lib/custom_rasterizer_kernel/rasterizer.cpp +build /apdcephfs_cq5/share_300600172/huiwenshi/repos/Hunyuan3D-2-spaces/hy3dgen/texgen/custom_rasterizer/build/temp.linux-x86_64-cpython-311/lib/custom_rasterizer_kernel/rasterizer_gpu.o: cuda_compile /apdcephfs_cq5/share_300600172/huiwenshi/repos/Hunyuan3D-2-spaces/hy3dgen/texgen/custom_rasterizer/lib/custom_rasterizer_kernel/rasterizer_gpu.cu + + + + + + diff --git a/hy3dgen/texgen/custom_rasterizer/build/temp.linux-x86_64-cpython-311/lib/custom_rasterizer_kernel/grid_neighbor.o b/hy3dgen/texgen/custom_rasterizer/build/temp.linux-x86_64-cpython-311/lib/custom_rasterizer_kernel/grid_neighbor.o new file mode 100644 index 0000000000000000000000000000000000000000..372a5daca94d37bb722a058e89a13e2153bc6341 Binary files /dev/null and b/hy3dgen/texgen/custom_rasterizer/build/temp.linux-x86_64-cpython-311/lib/custom_rasterizer_kernel/grid_neighbor.o differ diff --git a/hy3dgen/texgen/custom_rasterizer/build/temp.linux-x86_64-cpython-311/lib/custom_rasterizer_kernel/rasterizer.o b/hy3dgen/texgen/custom_rasterizer/build/temp.linux-x86_64-cpython-311/lib/custom_rasterizer_kernel/rasterizer.o 
new file mode 100644 index 0000000000000000000000000000000000000000..ec8fed027a0fe9a3339c8aeb51bfbeaf3b47f570 Binary files /dev/null and b/hy3dgen/texgen/custom_rasterizer/build/temp.linux-x86_64-cpython-311/lib/custom_rasterizer_kernel/rasterizer.o differ diff --git a/hy3dgen/texgen/custom_rasterizer/build/temp.linux-x86_64-cpython-311/lib/custom_rasterizer_kernel/rasterizer_gpu.o b/hy3dgen/texgen/custom_rasterizer/build/temp.linux-x86_64-cpython-311/lib/custom_rasterizer_kernel/rasterizer_gpu.o new file mode 100644 index 0000000000000000000000000000000000000000..963b6b8213309be5897c47fd976db2df8edafb3a Binary files /dev/null and b/hy3dgen/texgen/custom_rasterizer/build/temp.linux-x86_64-cpython-311/lib/custom_rasterizer_kernel/rasterizer_gpu.o differ diff --git a/hy3dgen/texgen/custom_rasterizer/custom_rasterizer.egg-info/PKG-INFO b/hy3dgen/texgen/custom_rasterizer/custom_rasterizer.egg-info/PKG-INFO new file mode 100644 index 0000000000000000000000000000000000000000..4fd8d7197973d690207193769b1355f2aab0f91d --- /dev/null +++ b/hy3dgen/texgen/custom_rasterizer/custom_rasterizer.egg-info/PKG-INFO @@ -0,0 +1,3 @@ +Metadata-Version: 2.1 +Name: custom_rasterizer +Version: 0.1 diff --git a/hy3dgen/texgen/custom_rasterizer/custom_rasterizer.egg-info/SOURCES.txt b/hy3dgen/texgen/custom_rasterizer/custom_rasterizer.egg-info/SOURCES.txt new file mode 100644 index 0000000000000000000000000000000000000000..ca40e02e41f7ba071df02ce368bfefec2847a6ad --- /dev/null +++ b/hy3dgen/texgen/custom_rasterizer/custom_rasterizer.egg-info/SOURCES.txt @@ -0,0 +1,12 @@ +setup.py +./custom_rasterizer/__init__.py +./custom_rasterizer/io_glb.py +./custom_rasterizer/io_obj.py +./custom_rasterizer/render.py +custom_rasterizer.egg-info/PKG-INFO +custom_rasterizer.egg-info/SOURCES.txt +custom_rasterizer.egg-info/dependency_links.txt +custom_rasterizer.egg-info/top_level.txt +lib/custom_rasterizer_kernel/grid_neighbor.cpp +lib/custom_rasterizer_kernel/rasterizer.cpp +lib/custom_rasterizer_kernel/rasterizer_gpu.cu \ No newline at end of file diff --git a/hy3dgen/texgen/custom_rasterizer/custom_rasterizer.egg-info/dependency_links.txt b/hy3dgen/texgen/custom_rasterizer/custom_rasterizer.egg-info/dependency_links.txt new file mode 100644 index 0000000000000000000000000000000000000000..8b137891791fe96927ad78e64b0aad7bded08bdc --- /dev/null +++ b/hy3dgen/texgen/custom_rasterizer/custom_rasterizer.egg-info/dependency_links.txt @@ -0,0 +1 @@ + diff --git a/hy3dgen/texgen/custom_rasterizer/custom_rasterizer.egg-info/top_level.txt b/hy3dgen/texgen/custom_rasterizer/custom_rasterizer.egg-info/top_level.txt new file mode 100644 index 0000000000000000000000000000000000000000..4880ad0e94189fc44fe2052edd5eaa0fcdbdb7e8 --- /dev/null +++ b/hy3dgen/texgen/custom_rasterizer/custom_rasterizer.egg-info/top_level.txt @@ -0,0 +1,2 @@ +custom_rasterizer +custom_rasterizer_kernel diff --git a/hy3dgen/texgen/differentiable_renderer/__pycache__/__init__.cpython-311.pyc b/hy3dgen/texgen/differentiable_renderer/__pycache__/__init__.cpython-311.pyc new file mode 100644 index 0000000000000000000000000000000000000000..7c909687eb9745d0422a372a9b17cb77cbb7b95f Binary files /dev/null and b/hy3dgen/texgen/differentiable_renderer/__pycache__/__init__.cpython-311.pyc differ diff --git a/hy3dgen/texgen/differentiable_renderer/__pycache__/camera_utils.cpython-311.pyc b/hy3dgen/texgen/differentiable_renderer/__pycache__/camera_utils.cpython-311.pyc new file mode 100644 index 0000000000000000000000000000000000000000..d4910aac2dc7427ff1814f80ddc405f2d2fb5659 
Binary files /dev/null and b/hy3dgen/texgen/differentiable_renderer/__pycache__/camera_utils.cpython-311.pyc differ diff --git a/hy3dgen/texgen/differentiable_renderer/__pycache__/mesh_render.cpython-311.pyc b/hy3dgen/texgen/differentiable_renderer/__pycache__/mesh_render.cpython-311.pyc new file mode 100644 index 0000000000000000000000000000000000000000..4dbe21e2aaa7e76654e199320c264c873840fd38 Binary files /dev/null and b/hy3dgen/texgen/differentiable_renderer/__pycache__/mesh_render.cpython-311.pyc differ diff --git a/hy3dgen/texgen/differentiable_renderer/__pycache__/mesh_utils.cpython-311.pyc b/hy3dgen/texgen/differentiable_renderer/__pycache__/mesh_utils.cpython-311.pyc new file mode 100644 index 0000000000000000000000000000000000000000..6863b8e6fb94a3c9ad5d413eea2dec3bb8620b0f Binary files /dev/null and b/hy3dgen/texgen/differentiable_renderer/__pycache__/mesh_utils.cpython-311.pyc differ diff --git a/hy3dgen/texgen/differentiable_renderer/mesh_processor.cpython-311-x86_64-linux-gnu.so b/hy3dgen/texgen/differentiable_renderer/mesh_processor.cpython-311-x86_64-linux-gnu.so new file mode 100644 index 0000000000000000000000000000000000000000..42890fece062ce38cfd31c7fb8beb7138fcdb56e Binary files /dev/null and b/hy3dgen/texgen/differentiable_renderer/mesh_processor.cpython-311-x86_64-linux-gnu.so differ diff --git a/hy3dgen/texgen/utils/__pycache__/__init__.cpython-311.pyc b/hy3dgen/texgen/utils/__pycache__/__init__.cpython-311.pyc new file mode 100644 index 0000000000000000000000000000000000000000..8408699c88b69c7c94e4f6503bfd3be447f333d1 Binary files /dev/null and b/hy3dgen/texgen/utils/__pycache__/__init__.cpython-311.pyc differ diff --git a/hy3dgen/texgen/utils/__pycache__/dehighlight_utils.cpython-311.pyc b/hy3dgen/texgen/utils/__pycache__/dehighlight_utils.cpython-311.pyc new file mode 100644 index 0000000000000000000000000000000000000000..c892752819297936fbc05fe395ff72aa3a3eb624 Binary files /dev/null and b/hy3dgen/texgen/utils/__pycache__/dehighlight_utils.cpython-311.pyc differ diff --git a/hy3dgen/texgen/utils/__pycache__/multiview_utils.cpython-311.pyc b/hy3dgen/texgen/utils/__pycache__/multiview_utils.cpython-311.pyc new file mode 100644 index 0000000000000000000000000000000000000000..a4f72f08cccd3e2985a56a257789c020d35e88c3 Binary files /dev/null and b/hy3dgen/texgen/utils/__pycache__/multiview_utils.cpython-311.pyc differ diff --git a/hy3dgen/texgen/utils/__pycache__/uv_warp_utils.cpython-311.pyc b/hy3dgen/texgen/utils/__pycache__/uv_warp_utils.cpython-311.pyc new file mode 100644 index 0000000000000000000000000000000000000000..debca7421f977b8de64612c3dbc78206709d7d37 Binary files /dev/null and b/hy3dgen/texgen/utils/__pycache__/uv_warp_utils.cpython-311.pyc differ