小臣子吃大橙子 committed · Commit f315363 · 1 parent: b6d2702

upload MULTI_v1.3.1_20241210_release.zip and update README

Files changed:
- MULTI_v1.2.2_20240212_release.zip → MULTI_v1.3.1_20241210_release.zip (+2 -2)
- README.md (+14 -28)
- README_zh.md (+11 -26)
MULTI_v1.2.2_20240212_release.zip → MULTI_v1.3.1_20241210_release.zip
RENAMED

````diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:8d234585be95049b3c9385d3e8e3a6901a854c3fba60532034be36b66ae86378
+size 499451021
````
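The file above is a Git LFS pointer rather than the archive itself: the repository stores only the SHA-256 digest and byte size (about 499 MB), and the zip is resolved from LFS storage at download time. Below is a minimal sketch of fetching and verifying the release with `huggingface_hub`; the `repo_id` is a hypothetical placeholder, since this page does not name the repository.

```python
# Minimal sketch: fetch the LFS-backed release archive from the Hugging Face
# Hub and check it against the pointer's digest shown in the diff above.
import hashlib

from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="your-org/MULTI",  # hypothetical placeholder, not the real repo ID
    filename="MULTI_v1.3.1_20241210_release.zip",
    repo_type="dataset",
)

sha256 = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        sha256.update(chunk)
assert sha256.hexdigest() == "8d234585be95049b3c9385d3e8e3a6901a854c3fba60532034be36b66ae86378"
```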
README.md
CHANGED

````diff
@@ -30,23 +30,6 @@ viewer: False
 
 Rapid progress in multimodal large language models (MLLMs) highlights the need to introduce challenging yet realistic benchmarks to the academic community, while existing benchmarks primarily focus on understanding simple natural images and short context. In this paper, we present ***MULTI***, a cutting-edge benchmark for evaluating MLLMs on understanding complex tables and images and on reasoning with long context. **MULTI** provides multimodal inputs and requires responses that are either precise or open-ended, reflecting real-life examination styles. **MULTI** includes over 18,000 questions and challenges MLLMs with a variety of tasks, ranging from formula derivation to image detail analysis and cross-modality reasoning. We also introduce ***MULTI-Elite***, a selected hard subset of 500 questions, and ***MULTI-Extend***, with more than 4,500 external knowledge context pieces. Our evaluation indicates significant potential for MLLM advancement, with GPT-4V achieving a **63.7%** accuracy rate on **MULTI**, in contrast to other MLLMs scoring between **28.5%** and **55.3%**. **MULTI** serves not only as a robust evaluation platform but also paves the way for the development of expert-level AI.
 
-## 🏆 Leaderboard
-
-| Modality | Model         | Version                    | Overall | MULTI-Elite |
-|:--------:|:-------------:|----------------------------|:-------:|:-----------:|
-| 🖼️       | GPT-4V        | gpt-4-vision-preview       | 63.7    | 14.0        |
-| 🖼️       | Yi-VL         | Yi-34B-Chat                | 55.3    | 26.2        |
-| 🖼️       | Gemini Vision | gemini-pro-vision          | 53.7    | 12.4        |
-| 📃       | Gemini        | gemini-pro                 | 52.2    | 10.5        |
-| 📃       | GPT-4         | gpt-4-1106-preview         | 50.2    | 5.8         |
-| 📃       | DFM-2.0       | dfm-2.0-70b-preview        | 49.7    | 18.0        |
-| 🖼️       | InternVL      | InternVL-Chat-Chinese-V1.1 | 44.9    | 20.7        |
-| 🖼️       | Qwen-VL       | Qwen-VL-Chat               | 39.0    | 10.5        |
-| 📃       | ChatGPT       | gpt-3.5-turbo-1106         | 35.9    | 4.7         |
-| 🖼️       | VisCPM        | VisCPM-Chat                | 33.4    | 13.0        |
-| 📃       | MOSS          | moss-moon-003-sft          | 32.6    | 13.1        |
-| 🖼️       | VisualGLM     | visualglm-6b               | 31.1    | 12.8        |
-| 🖼️       | Chinese-LLaVA | Chinese-LLaVA-Cllama2      | 28.5    | 12.3        |
 
 ## ⏬ Download
 
````
````diff
@@ -62,10 +45,13 @@ The structure of `./data` should be something like:
 ```
 ./data
 ├── images                                  # folder containing images
-├── problem_v1.
+├── problem_v1.3.1_20241210_release.json    # MULTI
 ├── knowledge_v1.2.2_20240212_release.json  # MULTI-Extend
-├── hard_list_v1.
-
+├── hard_list_v1.3.0_20241203.json          # MULTI-Elite
+├── captions_v1.3.1_20241210_blip.csv       # image captions generated by BLIP-6.7B
+├── captions_v1.3.1_20241210_points.csv     # image captions generated by POINTS-1-5
+├── ocr_v1.3.1_20241210_easyocr.csv         # OCR data generated by EasyOCR
+└── ocr_v1.3.1_20241210_points.csv          # OCR data generated by POINTS-1-5
 ```
 
 ## 📝 How to Evaluate
````
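Once the data is downloaded, the files listed above can be inspected directly. Here is a minimal sketch that stays agnostic about the JSON schemas and CSV columns, since this diff does not document them:

```python
# Minimal sketch: load the released data files and report their shapes.
# Field names are intentionally not assumed; only container types are shown.
import csv
import json

with open("./data/problem_v1.3.1_20241210_release.json", encoding="utf-8") as f:
    problems = json.load(f)   # MULTI questions (over 18,000 per the abstract)
with open("./data/knowledge_v1.2.2_20240212_release.json", encoding="utf-8") as f:
    knowledge = json.load(f)  # MULTI-Extend external knowledge pieces
with open("./data/captions_v1.3.1_20241210_blip.csv", encoding="utf-8") as f:
    captions = list(csv.reader(f))  # BLIP-generated image captions

print(type(problems).__name__, len(problems))
print(type(knowledge).__name__, len(knowledge))
print(len(captions), "caption rows")
```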
````diff
@@ -94,17 +80,17 @@ If you just want to generate data for a specific setting (using `--debug` argument)
 
 For a quick start, see these examples:
 
-Test GPT-
+Test GPT-4o model on whole MULTI with multimodal input, using MULTI-Extend as external knowledge:
 
 ```shell
 python eval.py \
---problem_file ../data/
---knowledge_file ../data/
+--problem_file ../data/problem_{version}.json \
+--knowledge_file ../data/knowledge_{version}.json \
 --questions_type 0,1,2,3 \
 --image_type 0,1,2 \
 --input_type 2 \
---model gpt-
---model_version gpt-
+--model gpt-4o \
+--model_version gpt-4o-latest \
 --api_key sk-************************************************
 ```
 
````
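The `{version}` placeholders above stand for the concrete release suffixes in the `./data` tree (for example `problem_v1.3.1_20241210_release.json`). As a convenience sketch, not a script shipped with the repository, the same invocation could be driven from Python:

```python
# Minimal sketch: substitute the release file names for the {version}
# placeholders and run eval.py. Flag values mirror the GPT-4o example above;
# the meaning of the numeric type codes is defined by eval.py itself.
import os
import subprocess

cmd = [
    "python", "eval.py",
    "--problem_file", "../data/problem_v1.3.1_20241210_release.json",
    "--knowledge_file", "../data/knowledge_v1.2.2_20240212_release.json",
    "--questions_type", "0,1,2,3",
    "--image_type", "0,1,2",
    "--input_type", "2",
    "--model", "gpt-4o",
    "--model_version", "gpt-4o-latest",
    "--api_key", os.environ["OPENAI_API_KEY"],  # avoid hard-coding the key
]
subprocess.run(cmd, check=True)
```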
````diff
@@ -112,9 +98,9 @@ Test Qwen-VL model on MULTI-Elite with image caption input, skip all questions n
 
 ```shell
 python eval.py \
---problem_file ../data/
---subset ../data/
---caption_file ../data/
+--problem_file ../data/problem_{version}.json \
+--subset ../data/hard_list_{version}.json \
+--caption_file ../data/captions_{version}.csv \
 --questions_type 0,1 \
 --image_type 1,2 \
 --input_type 1 \
````
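In the Qwen-VL example, `--subset` points the run at the MULTI-Elite hard list. The hard-list schema is not documented in this diff, so the following sketch of the implied filtering assumes a JSON array of question IDs that match identifiers on the problem records:

```python
# Speculative sketch of what --subset implies: keep only MULTI-Elite questions.
# Assumption (unverified): the hard list is a JSON array of question IDs.
import json

with open("../data/problem_v1.3.1_20241210_release.json", encoding="utf-8") as f:
    problems = json.load(f)
with open("../data/hard_list_v1.3.0_20241203.json", encoding="utf-8") as f:
    hard_ids = set(json.load(f))

if isinstance(problems, dict):   # assumed: problems keyed by question ID
    elite = {k: v for k, v in problems.items() if k in hard_ids}
else:                            # or a list of records carrying an "id" field
    elite = [p for p in problems if p.get("id") in hard_ids]
print(f"matched {len(elite)} of {len(hard_ids)} MULTI-Elite questions")
```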
README_zh.md
CHANGED

````diff
@@ -22,24 +22,6 @@
 
 Against the backdrop of rapid progress in multimodal large language models (MLLMs), it has become especially important to propose benchmarks that are challenging and true to real-world scenarios, while existing benchmarks mainly focus on understanding simple natural images and short text. In this paper we introduce ***MULTI***, a cutting-edge benchmark for evaluating MLLMs on understanding complex tables and images and on reasoning with long context. **MULTI** provides multimodal inputs and requires answers that are either precise or open-ended, reflecting real-life examination styles. **MULTI** includes over 18,000 questions and challenges MLLMs with a variety of tasks, from formula derivation to image detail analysis and cross-modality reasoning. We also introduce ***MULTI-Elite***, a carefully selected hard subset of 500 questions, and ***MULTI-Extend***, containing more than 4,500 pieces of external knowledge context. Our evaluation shows great potential for MLLM progress: GPT-4V reaches **63.7%** accuracy on **MULTI**, while other MLLMs score between **28.5%** and **55.3%**. **MULTI** serves not only as a robust evaluation platform but also points the way for the development of expert-level AI.
 
-## 🏆 Leaderboard
-
-| Modality | Model         | Version                    | Overall | MULTI-Elite |
-|:--------:|:-------------:|----------------------------|:-------:|:-----------:|
-| 🖼️       | GPT-4V        | gpt-4-vision-preview       | 63.7    | 14.0        |
-| 🖼️       | Yi-VL         | Yi-34B-Chat                | 55.3    | 26.2        |
-| 🖼️       | Gemini Vision | gemini-pro-vision          | 53.7    | 12.4        |
-| 📃       | Gemini        | gemini-pro                 | 52.2    | 10.5        |
-| 📃       | GPT-4         | gpt-4-1106-preview         | 50.2    | 5.8         |
-| 📃       | DFM-2.0       | dfm-2.0-70b-preview        | 49.7    | 18.0        |
-| 🖼️       | InternVL      | InternVL-Chat-Chinese-V1.1 | 44.9    | 20.7        |
-| 🖼️       | Qwen-VL       | Qwen-VL-Chat               | 39.0    | 10.5        |
-| 📃       | ChatGPT       | gpt-3.5-turbo-1106         | 35.9    | 4.7         |
-| 🖼️       | VisCPM        | VisCPM-Chat                | 33.4    | 13.0        |
-| 📃       | MOSS          | moss-moon-003-sft          | 32.6    | 13.1        |
-| 🖼️       | VisualGLM     | visualglm-6b               | 31.1    | 12.8        |
-| 🖼️       | Chinese-LLaVA | Chinese-LLaVA-Cllama2      | 28.5    | 12.3        |
-
 ## ⏬ Download
 
 You can download the data simply with the following command:
````
````diff
@@ -52,12 +34,15 @@ python download_data.py
 The structure of `./data` should look like this:
 
 ```
-./data
+./data
 ├── images                                  # folder containing images
-├── problem_v1.
+├── problem_v1.3.1_20241210_release.json    # MULTI
 ├── knowledge_v1.2.2_20240212_release.json  # MULTI-Extend
-├── hard_list_v1.
-
+├── hard_list_v1.3.0_20241203.json          # MULTI-Elite
+├── captions_v1.3.1_20241210_blip.csv       # image captions generated by BLIP-6.7b
+├── captions_v1.3.1_20241210_points.csv     # image captions generated by POINTS-1-5
+├── ocr_v1.3.1_20241210_easyocr.csv         # OCR data generated by EasyOCR
+└── ocr_v1.3.1_20241210_points.csv          # OCR data generated by POINTS-1-5
 ```
 
 ## 📝 How to Evaluate
````
````diff
@@ -90,13 +75,13 @@ pip install tiktoken tqdm
 
 ```shell
 python eval.py \
---problem_file ../data/
---knowledge_file ../data/
+--problem_file ../data/problem_{version}.json \
+--knowledge_file ../data/knowledge_{version}.json \
 --questions_type 0,1,2,3 \
 --image_type 0,1,2 \
 --input_type 2 \
---model gpt-
---model_version gpt-
+--model gpt-4o \
+--model_version gpt-4o-latest \
 --api_key sk-************************************************
 ```
 
````