MaoXun committed
Commit 88897c3 · verified · 1 Parent(s): 8b6c69b

Update README.md

Files changed (1): README.md +15 -2
README.md CHANGED
@@ -11,11 +11,24 @@ language:
  base_model:
  - liuhaotian/llava-pretrain-vicuna-7b-v1.3
  ---
+ # Brief
  This is the LoRA model of LLaVA 7B v1.3, trained on [Synergy-General-MultimodalPairs](https://huggingface.co/datasets/MaoXun/Synergy-General-MultimodalPairs).

- ## Training procedure

- ### Framework versions
 
+ # Dataset
+ ## Link
+ [Github](https://github.com/mao-code/Synergy-General-MultimodalPairs) | [Paper](https://link.springer.com/chapter/10.1007/978-981-97-6125-8_12)
+
+ ## Introduction
+ This is a visual-text pair dataset generated synergistically by a text-to-image model and a multimodal large language model.
+
+ Each file name encodes (generation number)\_(number of batches)\_(number of initial descriptions per batch)\_(number of refinement cycles per initial description).
+ For example, 1_20_10_5.zip is dataset number one, with 20 batches, 10 initial descriptions per batch, and 5 refinement cycles per initial description.
+ Therefore, this dataset contains a total of 20\*10\*5 = 1000 image-text pairs.
+
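To make the naming scheme concrete, here is a minimal Python sketch; the helper is illustrative only, not something shipped with the dataset:

```python
from pathlib import Path

def parse_dataset_name(filename: str) -> dict:
    """Split a name like '1_20_10_5.zip' into the four components described above."""
    generation, batches, descs, cycles = map(int, Path(filename).stem.split("_"))
    return {
        "generation": generation,
        "batches": batches,
        "initial_descriptions_per_batch": descs,
        "refinement_cycles_per_description": cycles,
        # batches * descriptions * cycles, e.g. 20 * 10 * 5 = 1000 pairs
        "total_pairs": batches * descs * cycles,
    }

print(parse_dataset_name("1_20_10_5.zip"))  # total_pairs == 1000
```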
+ Once you unzip one of the datasets, you will see two files: a zip archive of the images and a CSV file that maps each image's path to its description.
+
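A minimal loading sketch, assuming pandas and Pillow; the archive, CSV, and column names below are placeholders, since the card does not spell them out:

```python
import zipfile

import pandas as pd
from PIL import Image

# Placeholder file and column names: check the actual CSV header after unzipping.
df = pd.read_csv("descriptions.csv")
with zipfile.ZipFile("images.zip") as archive:
    row = df.iloc[0]
    with archive.open(row["image_path"]) as f:
        image = Image.open(f).convert("RGB")  # convert() forces a full read

print(row["description"], image.size)
```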
+ Here is the GitHub script of the generation process: https://github.com/mao-code/Synergy-General-MultimodalPairs
+
+ # Framework versions
  - PEFT 0.4.0
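Given the PEFT 0.4.0 dependency and the base model listed above, attaching the adapter might look like the sketch below. The adapter repo id is a placeholder, and real LLaVA inference typically goes through the LLaVA codebase (https://github.com/haotian-liu/LLaVA) rather than plain transformers:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Base checkpoint named in the card; LLaVA's own loader is usually preferred
# for multimodal inference, so treat this as a structural sketch only.
base = AutoModelForCausalLM.from_pretrained("liuhaotian/llava-pretrain-vicuna-7b-v1.3")

# Placeholder adapter id: substitute this model repo's actual id.
model = PeftModel.from_pretrained(base, "MaoXun/<this-lora-repo>")
model.eval()
```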