# Dataset details

We first download the .glb [ABO shapes](https://amazon-berkeley-objects.s3.amazonaws.com/archives/abo-3dmodels.tar), then turn the .glb files into 1024x1024x12 *object images* using Blender 4.0, with the maximum number of patches set to 64 and the margin set to 2%. The 1024-resolution data is in the `data/` folder, and its zipped archive is split across `omages_ABO_p64_m02_1024.tar_partaa` and `omages_ABO_p64_m02_1024.tar_partab`. Please refer to our GitHub repository for instructions on downloading, combining, and extracting the files.
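A typical way to recombine such split tar parts is to concatenate them in order before extracting; the exact steps are documented in the GitHub repository, and the combined filename below is an assumption:

```shell
# Reassemble the split archive into one tarball (shell globbing sorts
# _partaa before _partab, so the pieces concatenate in the right order)
cat omages_ABO_p64_m02_1024.tar_parta* > omages_ABO_p64_m02_1024.tar

# Extract the 1024-resolution object images
tar -xf omages_ABO_p64_m02_1024.tar
```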

Then, we downsample the 1024-resolution omages to 64 resolution using the sparse pooling described in the paper, and pack everything into `df_p64_m02_res64.h5`, from which the dataset loader reads data items.
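The downsampling step can be illustrated with a minimal sketch, assuming sparse pooling means averaging only the occupied (nonzero) pixels in each pooling window so that empty background does not dilute the values; the paper's exact pooling rule may differ:

```python
def sparse_pool(img, factor):
    """Downsample a 2D grid by `factor`, averaging only the non-empty
    (nonzero) cells in each window; fully empty windows stay 0.
    Hedged sketch of sparse pooling -- not the paper's exact code."""
    size = len(img) // factor
    out = [[0.0] * size for _ in range(size)]
    for i in range(size):
        for j in range(size):
            # Collect the occupied cells inside this factor x factor window
            vals = [img[i * factor + r][j * factor + c]
                    for r in range(factor) for c in range(factor)
                    if img[i * factor + r][j * factor + c] != 0]
            out[i][j] = sum(vals) / len(vals) if vals else 0.0
    return out

# 4x4 -> 2x2: only occupied pixels contribute to each output cell
img = [[1, 0, 0, 0],
       [0, 3, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 2]]
print(sparse_pool(img, 2))  # [[2.0, 0.0], [0.0, 2.0]]
```

In the actual pipeline this would be applied per channel of the 1024x1024x12 omage to produce the 64-resolution version stored in the `.h5` file.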

# Citation Information

@misc{yan2024objectworth64x64pixels,
      title={An Object is Worth 64x64 Pixels: Generating 3D Object via Image Diffusion},