czczup committed (verified) · Commit bb03296 · Parent: cc8b439

Update README.md

Files changed (1):
1. README.md (+7, -15)
README.md CHANGED
@@ -11,10 +11,6 @@ language:
 - multilingual
 tags:
 - internvl
-- vision
-- ocr
-- multi-image
-- video
 - custom_code
 ---
 
@@ -68,8 +64,6 @@ We introduce three simple designs:
 
 - We simultaneously use [InternVL](https://github.com/OpenGVLab/InternVL) and [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) repositories for model evaluation. Specifically, the results reported for DocVQA, ChartQA, InfoVQA, TextVQA, MME, AI2D, MMBench, CCBench, MMVet, and SEED-Image were tested using the InternVL repository. OCRBench, RealWorldQA, HallBench, and MathVista were evaluated using the VLMEvalKit.
 
-- Please note that evaluating the same model using different testing toolkits like [InternVL](https://github.com/OpenGVLab/InternVL) and [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) can result in slight differences, which is normal. Updates to code versions and variations in environment and hardware can also cause minor discrepancies in results.
-
 Limitations: Although we have made efforts to ensure the safety of the model during the training process and to encourage the model to generate text that complies with ethical and legal requirements, the model may still produce unexpected outputs due to its size and probabilistic generation paradigm. For example, the generated responses may contain biases, discrimination, or other harmful content. Please do not propagate such content. We are not responsible for any consequences resulting from the dissemination of harmful information.
 
 ## Examples
@@ -85,8 +79,6 @@ Limitations: Although we have made efforts to ensure the safety of the model dur
 
 We provide an example code to run InternVL-Chat-V1-5 using `transformers`.
 
-We also welcome you to experience the InternVL2 series models in our [online demo](https://internvl.opengvlab.com/).
-
 > Please use transformers>=4.37.2 to ensure the model works normally.
 
 ### Model Loading
@@ -555,7 +547,7 @@ print(response)
 
 ## License
 
-This project is released under the MIT license, while InternLM2 is licensed under the Apache-2.0 license.
+This project is released under the MIT License. This project uses the pre-trained internlm2-chat-20b as a component, which is licensed under the Apache License 2.0.
 
 ## Citation
 
@@ -568,16 +560,16 @@ If you find this project useful in your research, please consider citing:
   journal={arXiv preprint arXiv:2410.16261},
   year={2024}
 }
-@article{chen2023internvl,
-  title={InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks},
-  author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and Li, Bin and Luo, Ping and Lu, Tong and Qiao, Yu and Dai, Jifeng},
-  journal={arXiv preprint arXiv:2312.14238},
-  year={2023}
-}
 @article{chen2024far,
   title={How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites},
   author={Chen, Zhe and Wang, Weiyun and Tian, Hao and Ye, Shenglong and Gao, Zhangwei and Cui, Erfei and Tong, Wenwen and Hu, Kongzhi and Luo, Jiapeng and Ma, Zheng and others},
   journal={arXiv preprint arXiv:2404.16821},
   year={2024}
 }
+@article{chen2023internvl,
+  title={InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks},
+  author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and Li, Bin and Luo, Ping and Lu, Tong and Qiao, Yu and Dai, Jifeng},
+  journal={arXiv preprint arXiv:2312.14238},
+  year={2023}
+}
 ```
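For reference, the quick-start line kept by this diff ("We provide an example code to run InternVL-Chat-V1-5 using `transformers`", with the `transformers>=4.37.2` note) boils down to loading the checkpoint with remote code enabled, which is why the `custom_code` tag survives the tag cleanup above. A minimal loading sketch, assuming the public repo id `OpenGVLab/InternVL-Chat-V1-5` and a CUDA GPU with enough memory for the full model; the exact keyword arguments follow the model card's own example but should be treated as illustrative:

```python
# Minimal loading sketch for InternVL-Chat-V1-5.
# Assumes transformers>=4.37.2 (per the README note) and a CUDA GPU
# with enough memory to hold the full model in bf16.
import torch
from transformers import AutoModel, AutoTokenizer

path = "OpenGVLab/InternVL-Chat-V1-5"
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,   # bf16 halves memory relative to fp32
    low_cpu_mem_usage=True,
    trust_remote_code=True,       # model code ships with the checkpoint (custom_code tag)
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
```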
 
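Continuing the sketch above: the README's inference examples end in `print(response)` (visible in the `@@ -555,7 +547,7 @@` hunk context) and go through the checkpoint's `model.chat` interface. The call below is a hedged sketch, not the card's verbatim example: `load_image` stands in for the model card's own image-preprocessing helper (dynamic tiling into 448x448 patches), not a `transformers` built-in, and the `generation_config` keys and image path are illustrative assumptions.

```python
# Hedged single-turn chat sketch. Requires `model` and `tokenizer` from the
# loading sketch above. `load_image` is assumed to be the model card's
# preprocessing helper (dynamic 448x448 tiling); the path is a placeholder.
generation_config = dict(max_new_tokens=1024, do_sample=False)  # illustrative settings
pixel_values = load_image("./examples/image1.jpg").to(torch.bfloat16).cuda()
question = "Please describe the image in detail."
response = model.chat(tokenizer, pixel_values, question, generation_config)
print(response)
```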