Create README.md
README.md
ADDED
---
pipeline_tag: text-to-video
---

AnimateDiff is a method that allows you to create videos using pre-existing Stable Diffusion text-to-image models.
The weights were converted from https://huggingface.co/guoyww/animatediff/blob/main/v3_sd15_mm.ckpt to the Hugging Face Diffusers format
using the Diffusers conversion script (available at https://github.com/huggingface/diffusers/blob/main/scripts/convert_animatediff_motion_module_to_diffusers.py), roughly as sketched below.
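
A minimal sketch of the conversion call. The flag names below (`--ckpt_path`, `--output_path`) are assumptions about the script's argument parser, not something this card documents; check the script itself for the exact options it accepts.

```bash
# Hypothetical invocation of the Diffusers conversion script; the flag names
# are assumptions, so verify them against the script's argparse definitions.
python convert_animatediff_motion_module_to_diffusers.py \
  --ckpt_path v3_sd15_mm.ckpt \
  --output_path animatediff-motion-adapter-v1-5-3
```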

The following example demonstrates how to use the motion module with an existing Stable Diffusion text-to-image model.

```python
import torch
from diffusers import MotionAdapter, AnimateDiffPipeline, DDIMScheduler
from diffusers.utils import export_to_gif

# Load the motion adapter
adapter = MotionAdapter.from_pretrained("Warvito/animatediff-motion-adapter-v1-5-3")
# Load a Stable Diffusion 1.5 based finetuned model
model_id = "SG161222/Realistic_Vision_V5.1_noVAE"
pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter)
scheduler = DDIMScheduler.from_pretrained(
    model_id, subfolder="scheduler", clip_sample=False, timestep_spacing="linspace", steps_offset=1
)
pipe.scheduler = scheduler

# Enable memory savings
pipe.enable_vae_slicing()
pipe.enable_model_cpu_offload()

output = pipe(
    prompt=(
        "masterpiece, bestquality, highlydetailed, ultradetailed, sunset, "
        "orange sky, warm lighting, fishing boats, ocean waves, seagulls, "
        "rippling water, wharf, silhouette, serene atmosphere, dusk, evening glow, "
        "golden hour, coastal landscape, seaside scenery"
    ),
    negative_prompt="bad quality, worse quality",
    num_frames=16,
    guidance_scale=7.5,
    num_inference_steps=25,
    generator=torch.Generator("cpu").manual_seed(42),
)
frames = output.frames[0]
export_to_gif(frames, "animation.gif")
```
|
47 |
+
|
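
For further memory savings you can load the adapter and pipeline in half precision; this is a sketch of standard Diffusers usage rather than something specific to this checkpoint. `export_to_video` from `diffusers.utils` can be used if you prefer an MP4 over a GIF.

```python
import torch
from diffusers import MotionAdapter, AnimateDiffPipeline
from diffusers.utils import export_to_video

# Half-precision variant of the pipeline above; roughly halves the memory
# footprint on CUDA GPUs at a small cost in numerical precision.
adapter = MotionAdapter.from_pretrained(
    "Warvito/animatediff-motion-adapter-v1-5-3", torch_dtype=torch.float16
)
pipe = AnimateDiffPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE", motion_adapter=adapter, torch_dtype=torch.float16
).to("cuda")

# Run pipe(...) as in the example above, then export as MP4 instead of GIF:
# frames = output.frames[0]
# export_to_video(frames, "animation.mp4")
```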