Update README.md
---
license: apache-2.0
pipeline_tag: text-to-image
---

# MV-Adapter Model Card

<div align="center">

[**Project Page**](https://huanngzh.github.io/MV-Adapter-Page/) **|** [**Paper (ArXiv)**](https://arxiv.org/abs/2412.03632) **|** [**Paper (HF)**](https://hf.co/papers/2412.03632) **|** [**Code**](https://github.com/huanngzh/MV-Adapter) **|** [**Gradio demo**](https://huggingface.co/spaces/VAST-AI/MV-Adapter-I2MV-SDXL)

Create High-fidelity Multi-view Images with Various Base T2I Models and Various Conditions.

</div>

## Introduction

MV-Adapter is a creative productivity tool that seamlessly transfers text-to-image models into multi-view generators.

Highlights:

- Generates 768x768 multi-view images
- Works well with personalized models (e.g., DreamShaper, Animagine), LCM, and ControlNet
- Supports text- or image-to-multi-view generation (for subsequent 3D reconstruction), as well as geometry-guided generation for 3D texturing
- Supports arbitrary view generation

## Examples

<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/6375d136dee28348a9c63cbf/P7ywma2xFX-_SfEj1cJmY.mp4"></video>

## Model Details

| Model | Base Model | HF Weights | Demo Link |
| :-------------------------: | :--------: | :------------------------------------------------------------------------------------------------------------------------: | :-------------------------------------------------------------------------------------------------------------------------------------------: |
| Text-to-Multiview | SDXL | [mvadapter_t2mv_sdxl.safetensors](https://huggingface.co/huanngzh/mv-adapter/resolve/main/mvadapter_t2mv_sdxl.safetensors) | [General](https://huggingface.co/spaces/VAST-AI/MV-Adapter-T2MV-SDXL) / [Anime](https://huggingface.co/spaces/huanngzh/MV-Adapter-T2MV-Anime) |
| Image-to-Multiview | SDXL | [mvadapter_i2mv_sdxl.safetensors](https://huggingface.co/huanngzh/mv-adapter/resolve/main/mvadapter_i2mv_sdxl.safetensors) | [Demo](https://huggingface.co/spaces/VAST-AI/MV-Adapter-I2MV-SDXL) |
| Text-Geometry-to-Multiview | SDXL | | |
| Image-Geometry-to-Multiview | SDXL | | |
| Image-to-Arbitrary-Views | SDXL | | |

## Usage

Refer to our [GitHub repository](https://github.com/huanngzh/MV-Adapter) for installation and inference instructions.

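For quick experimentation, the adapter checkpoints listed in the Model Details table can be pulled straight from this repository with `huggingface_hub`. The snippet below is a minimal sketch assuming only that package and the file names shown above; the multi-view pipelines that actually consume these weights are provided by the GitHub code.

```python
# Minimal sketch: download an MV-Adapter checkpoint from this repository.
# Assumes only the huggingface_hub package; the inference pipelines that
# load these weights live in the GitHub repository linked above.
from huggingface_hub import hf_hub_download

adapter_path = hf_hub_download(
    repo_id="huanngzh/mv-adapter",
    filename="mvadapter_t2mv_sdxl.safetensors",  # see the Model Details table
)
print(adapter_path)  # local path to the cached .safetensors file
```

The same call with `mvadapter_i2mv_sdxl.safetensors` fetches the image-to-multiview weights.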

## Citation

If you find this work helpful, please consider citing our paper:

```bibtex
@article{huang2024mvadapter,
  title={MV-Adapter: Multi-view Consistent Image Generation Made Easy},
  author={Huang, Zehuan and Guo, Yuanchen and Wang, Haoran and Yi, Ran and Ma, Lizhuang and Cao, Yan-Pei and Sheng, Lu},
  journal={arXiv preprint arXiv:2412.03632},
  year={2024}
}
```