Commit b9a98c8 (verified) · 1 parent: 036cdb2
Committed by zeroMN

Update README.md

Files changed (1): README.md (+15 −8)
README.md CHANGED
@@ -44,13 +44,20 @@ pipeline_tag: text-generation
  This model, named `Evolutionary Multi-Modal Model`, is a multimodal transformer designed to handle a variety of tasks including vision and audio processing. It is built on top of the `adapter-transformers` and `transformers` libraries and is intended to be a versatile base model for both direct use and fine-tuning.

  --
- **Developed by:** Independent researcher
- **Funded by :** Self-funded
- **Shared by :** Independent researcher
- **Model type:** MEvolutionary Multi-Modal Model
- **Language(s) (NLP):** English zh
- **License:** Apache-2.0
- **Finetuned from model :** None
+ **Developed
+ by:** Independent researcher
+ **Funded
+ by :** Self-funded
+ **Shared
+ by :** Independent researcher
+ **Model
+ type:** MEvolutionary Multi-Modal Model
+ **Language(s)
+ (NLP):** English zh
+ **License:**
+ Apache-2.0
+ **Finetuned from model**
+ None

@@ -72,7 +79,7 @@ print(generated_text)
  ```
  ### Downstream Use

- The model can be fine-tuned for specific tasks such as visual question answering (VQA), image captioning, and audio recognition. It is particularly useful for multimodal tasks that require understanding both visual and audio inputs.
+ The model can be fine-tuned for specific tasks such as visual question answering (VQA), image captioning, and audio recognition.

  ### Out-of-Scope Use
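
The first hunk header shows the card's Direct Use snippet ending in `print(generated_text)`. For readers of this commit, here is a minimal sketch of what that direct-use path might look like through the `transformers` pipeline API; the repo id below is a placeholder, not the model's confirmed identifier:

```python
# Sketch of direct use, assuming the checkpoint exposes a standard
# text-generation interface. The repo id is a placeholder, not the
# model's confirmed identifier.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="zeroMN/evolutionary-multi-modal-model",  # hypothetical repo id
    trust_remote_code=True,  # custom architectures often require this
)

generated_text = generator("An image of a busy street:", max_new_tokens=50)[0]["generated_text"]
print(generated_text)
```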
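
The revised Downstream Use line keeps fine-tuning for VQA, image captioning, and audio recognition in scope. A hedged sketch of a generic `Trainer`-based fine-tuning loop, assuming the checkpoint loads as a causal LM; the repo id, toy dataset, and hyperparameters are all illustrative, not taken from the card:

```python
# Hedged sketch of task-specific fine-tuning with the generic Trainer
# API, assuming the checkpoint loads as a causal LM. The repo id, the
# toy dataset, and all hyperparameters are illustrative.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments

repo_id = "zeroMN/evolutionary-multi-modal-model"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)

# Toy captioning-style corpus standing in for a real training set.
train_dataset = Dataset.from_dict(
    {"text": ["A photo of a cat on a sofa.", "A dog running on the beach."]}
)

def tokenize(batch):
    enc = tokenizer(batch["text"], truncation=True, padding="max_length", max_length=32)
    enc["labels"] = enc["input_ids"].copy()  # causal LM: labels mirror inputs
    return enc

train_dataset = train_dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="finetuned-demo",
    num_train_epochs=1,
    per_device_train_batch_size=2,
    learning_rate=5e-5,
)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```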