Update README.md
README.md CHANGED
@@ -25,9 +25,12 @@ This model has a wide range of applications and can reason and generate videos b
## How to use

-For Colab usage, you can view [this webpage](https://colab.research.google.com/drive/1uW1ZqswkQ9Z9bp5Nbo5z59cAn7I0hE6R?usp=sharing).
+The model has been launched on [ModelScope Studio](https://modelscope.cn/studios/damo/text-to-video-synthesis/summary) and [huggingface](https://huggingface.co/spaces/damo-vilab/modelscope-text-to-video-synthesis), where you can try it directly; you can also refer to the [Colab page](https://colab.research.google.com/drive/1uW1ZqswkQ9Z9bp5Nbo5z59cAn7I0hE6R?usp=sharing#scrollTo=bSluBq99ObSk) to build it yourself.
+
+To make the model easier to try, users can refer to the [Aliyun Notebook Tutorial](https://modelscope.cn/headlines/detail/26) to quickly get started with this text-to-video model.
+
+This demo requires about 16GB of CPU RAM and 16GB of GPU RAM. Under the ModelScope framework, the current model can be used by calling a simple Pipeline: the input must be a dictionary whose only valid key is 'text', and its value should be a short text prompt. The model currently supports inference only on the GPU. A concrete code example follows:

### Operating environment (Python Package)
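For reference, the Pipeline call described in the added paragraph might look like the minimal sketch below. The task name, the model ID `damo/text-to-video-synthesis`, and the output key `OutputKeys.OUTPUT_VIDEO` are assumptions for illustration, not part of this diff; the README's own snippet is the authoritative example.

```python
# Minimal sketch of the Pipeline call described above; not the README's own
# snippet. The model ID and output key are assumptions.
from modelscope.pipelines import pipeline
from modelscope.outputs import OutputKeys

# Build the text-to-video pipeline (GPU-only inference, roughly 16GB of GPU RAM).
pipe = pipeline('text-to-video-synthesis', model='damo/text-to-video-synthesis')

# The input must be a dictionary whose only valid key is 'text',
# holding a short English prompt.
test_text = {'text': 'A panda eating bamboo on a rock.'}

# Run inference; the path of the generated video file is returned
# under OutputKeys.OUTPUT_VIDEO.
output_video_path = pipe(test_text)[OutputKeys.OUTPUT_VIDEO]
print('output_video_path:', output_video_path)
```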