foreverpiano committed · verified · Commit 61a7197 · 1 Parent(s): 725712e

Update README.md

Files changed (1): README.md (+2 −0)
@@ -49,6 +49,8 @@ We provide some qualitative comparison between FastHunyuan 6-step inference vs.
 
 ## Memory requirements
 
+Please check our GitHub repo for details: https://github.com/hao-ai-lab/FastVideo
+
 For inference, FastHunyuan runs on a single RTX 4090. We now support NF4 and LLM-INT8 quantized inference using BitsAndBytes for FastHunyuan. With NF4 quantization, inference can be performed on a single RTX 4090 GPU, requiring just 20 GB of VRAM.
 
 For LoRA finetuning, minimum hardware requirement
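The added section points readers to the repo for details on why NF4 quantization fits inference into ~20 GB on a single RTX 4090. A back-of-the-envelope sketch of the weight-storage savings behind that claim (the parameter count and per-scheme byte costs here are illustrative assumptions, not FastHunyuan's actual numbers, and activation/KV memory is ignored):

```python
# Rough VRAM needed just to hold model weights under each storage format.
# bf16 = 16-bit floats, int8 = LLM-INT8 weight storage, nf4 = 4-bit NormalFloat.
BYTES_PER_PARAM = {
    "bf16": 2.0,
    "int8": 1.0,
    "nf4": 0.5,
}

def weight_vram_gb(n_params: float, scheme: str) -> float:
    """Gigabytes required to store n_params weights in the given format."""
    return n_params * BYTES_PER_PARAM[scheme] / 1024**3

# Hypothetical 13B-parameter backbone (illustrative, not FastHunyuan's size):
n = 13e9
for scheme in BYTES_PER_PARAM:
    print(f"{scheme}: {weight_vram_gb(n, scheme):.1f} GB")
```

Quantizer metadata (scales, zero points) adds a small overhead on top of these figures, which is why real NF4 footprints land slightly above the raw 4-bit estimate.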