## Memory requirements
Please check our GitHub repo for details: https://github.com/hao-ai-lab/FastVideo
For inference, we now support NF4 and LLM-INT8 quantized inference using BitsAndBytes for FastHunyuan. With NF4 quantization, inference can be performed on a single RTX 4090 GPU, requiring just 20 GB of VRAM.
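As a rough illustration, NF4 loading with BitsAndBytes can be wired up through a diffusers-style quantization config. This is a minimal sketch, not the repo's official script: the checkpoint id `FastVideo/FastHunyuan-diffusers` and the pipeline/transformer classes are assumptions — see the FastVideo GitHub repo for the supported entry point.

```python
# Hedged sketch: NF4-quantized FastHunyuan inference via diffusers + bitsandbytes.
# The model id and classes below are assumptions; consult the FastVideo repo
# for the exact, supported loading code.
import torch
from diffusers import (
    BitsAndBytesConfig,
    HunyuanVideoPipeline,
    HunyuanVideoTransformer3DModel,
)

# NF4 4-bit quantization config (the "NF4" option mentioned above;
# set load_in_8bit=True instead for LLM-INT8)
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Quantize only the large transformer; other components stay in bf16
transformer = HunyuanVideoTransformer3DModel.from_pretrained(
    "FastVideo/FastHunyuan-diffusers",  # assumed checkpoint id
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)

pipe = HunyuanVideoPipeline.from_pretrained(
    "FastVideo/FastHunyuan-diffusers",  # assumed checkpoint id
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # helps fit NF4 inference in ~20 GB VRAM
```

With this config, only the transformer weights are stored in 4-bit NF4 while compute runs in bfloat16, which is what brings peak VRAM down to roughly 20 GB on a single RTX 4090.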
For LoRA finetuning, the minimum hardware requirement