zeroMN committed (verified)
Commit cc5507b · Parent: f607b5e

Update README.md

Files changed (1):
1. README.md (+14, −13)
README.md CHANGED
@@ -36,27 +36,28 @@ pipeline_tag: text2text-generation
 
 ### Model Description
 
-This model, named `SG0.1.pth`, is a multimodal transformer designed to handle a variety of tasks including vision and audio processing. It is built on top of the `adapter-transformers` and `transformers` libraries and is intended to be a versatile base model for both direct use and fine-tuning.
+This model, named `SG1.0.pth`, is a multimodal transformer designed to handle a variety of tasks including vision and audio processing. It is built on top of the `adapter-transformers` and `transformers` libraries and is intended to be a versatile base model for both direct use and fine-tuning.
 
-- **Developed by:** [Your Organization/Individual]
-- **Funded by:** [Funding Organization/Individual (if applicable)]
-- **Shared by:** [Your Organization/Individual]
-- **Model type:** Multimodal Transformer
-- **Language(s) (NLP):** English
-- **License:** Apache-2.0
-- **Finetuned from model:** [Pretrained Model Name (if applicable)]
+---
+- **Developed by:** Independent researcher
+- **Funded by:** Self-funded
+- **Shared by:** Independent researcher
+- **Model type:** Multimodal Transformer
+- **Language(s) (NLP):** English, Chinese (zh)
+- **License:** Apache-2.0
+- **Finetuned from model:** None
 
 ### Model Sources
 
-- **Repository:** [GitHub Repository URL](https://github.com/your-username/your-repo)
+- **Repository:** [zeroMN/SG1.0](https://huggingface.co/zeroMN/SG1.0)
 - **Paper:** [Paper Title](https://arxiv.org/abs/your-paper-id) (if applicable)
-- **Demo:** [Demo URL](https://your-demo-url) (if applicable)
+- **Demo:** [zeroMN-SG1.0 Space](https://huggingface.co/spaces/zeroMN/zeroMN-SG1.0)
 
 ## Uses
 
 ### Direct Use
 
-The `SG0.1.pth` model can be used directly for tasks such as image classification, object detection, and audio processing without any fine-tuning. It is designed to handle a wide range of input modalities and can be integrated into various applications.
+The `SG1.0.pth` model can be used directly for tasks such as image classification, object detection, and audio processing without any fine-tuning. It is designed to handle a wide range of input modalities and can be integrated into various applications.
 
 ### Downstream Use
 
@@ -78,7 +79,7 @@ Users (both direct and downstream) should be made aware of the following risks,
 
 ## How to Get Started with the Model
 
-Use the code below to get started with the `SG0.1.pth` model.
+Use the code below to get started with the `SG1.0.pth` model.
 
 ```python
 import torch
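
The README's quick-start snippet is cut off after `import torch` in this diff. As a minimal sketch only, assuming the `SG1.0.pth` checkpoint is stored as a plain PyTorch file in the `zeroMN/SG1.0` repository (the file name, the `hf_hub_download` call, and the inspection logic below are assumptions, not part of the commit), loading it might look like this:

```python
import torch
from huggingface_hub import hf_hub_download  # assumes the checkpoint is hosted in the zeroMN/SG1.0 repo

# Hypothetical quick-start sketch: the actual snippet in README.md is cut off
# after `import torch` in this diff, so the file name and loading path below
# are assumptions rather than the author's confirmed usage.
checkpoint_path = hf_hub_download(repo_id="zeroMN/SG1.0", filename="SG1.0.pth")

# Load the raw PyTorch checkpoint on CPU and inspect it before wiring it into
# a model class; the multimodal architecture itself is not defined here.
state = torch.load(checkpoint_path, map_location="cpu")
if isinstance(state, dict):
    print(f"Checkpoint contains {len(state)} top-level entries")
else:
    print(f"Loaded object of type {type(state).__name__}")
```

Wiring the loaded weights into the actual multimodal architecture still requires the model class shipped with the repository, which this truncated diff does not show.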