extra_gated_fields:
  I ALLOW Stability AI to email me about new model releases: checkbox
license: other
license_name: sai-nc-community
license_link_stable_zero123: https://huggingface.co/stabilityai/sdxl-turbo/blob/main/LICENSE_stable_zero123.md
license_link_stable_zero123_c: https://huggingface.co/stabilityai/sdxl-turbo/blob/main/LICENSE_stable_zero123_c.md
pipeline_tag: text-to-3d
---
# Stable Zero123

Please note: for commercial use, refer to https://stability.ai/license.

## Model Description
To use Stable Zero123 for object 3D mesh generation in [threestudio](https://github.com/threestudio-project/threestudio):

1. Install threestudio using their [instructions](https://github.com/threestudio-project/threestudio#installation)
2. Download the Stable Zero123 checkpoint `stable_zero123.ckpt` into the `load/zero123/` directory
3. Take an image of your choice, or generate one from text using your favourite AI image generator, such as Stable Assistant (https://stability.ai/stable-assistant), e.g. "A simple 3D render of a friendly dog"
4. Remove its background using Stable Assistant (https://stability.ai/stable-assistant)
5. Save it to `load/images/`, preferably with `_rgba.png` as the suffix
6. Run Zero-1-to-3 with the Stable Zero123 ckpt:

```sh
python launch.py --config configs/stable-zero123.yaml --train --gpu 0 data.image_path=...
```
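Steps 2–6 above can be sketched as a short shell session. This is only a sketch under assumptions: it is run from the threestudio repository root, `my_object_rgba.png` is a hypothetical image name (substitute your own), and the final training command is echoed rather than executed, since running it needs the downloaded checkpoint and a GPU.

```shell
# Create the directories threestudio's stable-zero123 config reads from.
mkdir -p load/zero123 load/images

# Step 2: the checkpoint downloaded from this repository belongs at
#   load/zero123/stable_zero123.ckpt

# Step 5: keep the _rgba.png suffix, as the steps above recommend.
img="my_object_rgba.png"   # hypothetical example name
case "$img" in
  *_rgba.png) echo "image name ok: $img" ;;
  *)          echo "warning: $img does not end in _rgba.png" >&2 ;;
esac

# Step 6: train against the prepared image (echoed, not executed).
echo "python launch.py --config configs/stable-zero123.yaml --train --gpu 0 data.image_path=./load/images/$img"
```

The last echoed line is the command from step 6; drop the `echo` once the checkpoint and image are actually in place.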
* **Model type**: latent diffusion model.
* **Finetuned from model**: [lambdalabs/sd-image-variations-diffusers](https://huggingface.co/lambdalabs/sd-image-variations-diffusers)
* **License**: We released two versions of Stable Zero123:
  * **Stable Zero123** was trained on some CC-BY-NC 3D objects, so it cannot be used commercially, but it can be used for research purposes under the [StabilityAI Non-Commercial Research Community License](https://huggingface.co/stabilityai/zero123-sai/raw/main/LICENSE_stable_zero123.md).
  * **Stable Zero123C** (“C” for “Commercially-available”) was trained only on CC-BY and CC0 3D objects. It is released under the [StabilityAI Community License](https://huggingface.co/stabilityai/zero123-sai/raw/main/LICENSE_stable_zero123.md); you can read more about the license [here](https://stability.ai/license).

  According to our internal tests, both models perform similarly in terms of prediction visual quality.

### Training Dataset