---
license: mit
language:
- zh
pretty_name: MULTI-Benchmark
viewer: False
---

# 🖼️ MULTI-Benchmark: Multimodal Understanding Leaderboard with Text and Images

<div align="center">

![MULTI](./docs/static/images/overview.png)

๐ŸŒ [Website](https://OpenDFM.github.io/MULTI-Benchmark/) | ๐Ÿ“ƒ [Paper](https://arxiv.org/abs/2402.03173/) | ๐Ÿค— [Dataset](https://huggingface.co/datasets/OpenDFM/MULTI-Benchmark) |
๐Ÿ† [Leaderboard](https://opendfm.github.io/MULTI-Benchmark/#leaderboard) | ๐Ÿ“ฎ [Submit](https://opendfm.github.io/MULTI-Benchmark/static/pages/submit.html)

[简体中文](./README_zh.md) | English

</div>

## 🔥 News

- **[2025.1.7]** We have updated our [leaderboard](https://opendfm.github.io/MULTI-Benchmark/#leaderboard) with the latest results.
- **[2025.1.2]** We have updated MULTI to v1.3.1.
- **[2024.3.4]** We have released the [evaluation page](https://OpenDFM.github.io/MULTI-Benchmark/static/pages/submit.html).
- **[2024.2.19]** We have released the [HuggingFace Page](https://huggingface.co/datasets/OpenDFM/MULTI-Benchmark/).
- **[2024.2.6]** We have published our [paper](https://arxiv.org/abs/2402.03173/) on arXiv.
- **[2023.12.7]** We have released the [code](https://github.com/OpenDFM/MULTI-Benchmark/tree/main/eval) of our benchmark evaluation.
- **[2023.12.5]** We have released the [GitHub Page](https://OpenDFM.github.io/MULTI-Benchmark/).

## 📖 Overview

The rapid development of multimodal large language models (MLLMs) raises the question of how they compare to human performance. Existing datasets often feature synthetic or
overly simplistic tasks, on which some models have already surpassed human expert baselines. In this paper, we present **MULTI**, a Chinese multimodal dataset derived from authentic examination
questions. Comprising over 18,000 carefully selected and refined questions, **MULTI** evaluates models against real-world examination standards, covering image-text comprehension,
complex reasoning, and knowledge recall. We also introduce **MULTI-Elite**, a hard subset of 500 carefully selected questions, and **MULTI-Extend**, a collection of more than 4,500 external knowledge
context pieces for testing in-context learning capabilities. **MULTI** not only serves as a robust evaluation platform but also paves the way for the development of expert-level AI.


## โฌ Download

You can download the data with the following commands:

```shell
cd eval
python download_data.py
```

The structure of `./data` should be something like:

```
./data
โ”œโ”€โ”€ images                                       # folder containing images
โ”œโ”€โ”€ problem_v1.3.1_20241210_release.json         # MULTI
โ”œโ”€โ”€ knowledge_v1.2.2_20240212_release.json       # MULTI-Extend
โ”œโ”€โ”€ hard_list_v1.3.0_20241203.json               # MULTI-Elite
โ”œโ”€โ”€ captions_v1.3.1_20241210_blip.csv            # image captions generated by BLIP-6.7B
โ”œโ”€โ”€ captions_v1.3.1_20241210_points.csv          # image captions generated by POINTS-1-5
โ”œโ”€โ”€ ocr_v1.3.1_20241210_easyocr.csv              # OCR data generated by EasyOCR
โ””โ”€โ”€ ocr_v1.3.1_20241210_points.csv               # OCR data generated by POINTS-1-5
```
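To take a quick look at the data outside the evaluation scripts, a minimal loading sketch is shown below. The version strings must match the files you actually downloaded, and the assumption that the MULTI-Elite file is a plain list of question IDs should be verified against the release.

```python
import json

def load_json(path):
    with open(path, encoding="utf-8") as f:
        return json.load(f)

# File names below follow the release listed above; adjust the version strings as needed.
problems = load_json("./data/problem_v1.3.1_20241210_release.json")     # MULTI questions
knowledge = load_json("./data/knowledge_v1.2.2_20240212_release.json")  # MULTI-Extend knowledge pieces
hard_list = load_json("./data/hard_list_v1.3.0_20241203.json")          # MULTI-Elite question IDs (assumed)

print(len(problems), "questions |", len(knowledge), "knowledge pieces |", len(hard_list), "MULTI-Elite entries")
```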

## ๐Ÿ“ How to Evaluate

We provide a unified evaluation framework in `eval`. Each file in `eval/models` contains an evaluator dedicated to one (M)LLM and implements a `generate_answer` method that takes a question as input and returns the model's answer.

```shell
cd eval
python eval.py -h # to list all supported arguments
python eval.py -l # to list all supported models
```

### Environment Preparation Before Usage

Each evaluator requires its own environment, and a single universal environment may not work for all of them. **Just follow each model's official setup guide.** If the model itself runs correctly, it should also work within our framework.

You only need to install two additional packages to run the evaluation code:

```shell
pip install tiktoken tqdm
```

If you only want to generate the input data for a specific setting (using the `--debug` argument), the line above is all you need.

### Running Evaluation

For a quick start, see these examples:

Test the GPT-4o model on the whole of MULTI with multimodal input, using MULTI-Extend as external knowledge:

```shell
python eval.py \
  --problem_file ../data/problem_{version}.json \
  --knowledge_file ../data/knowledge_{version}.json \
  --questions_type 0,1,2,3 \
  --image_type 0,1,2 \
  --input_type 2 \
  --model gpt-4o \
  --model_version gpt-4o-latest \
  --api_key sk-************************************************
```

Test the Qwen-VL model on MULTI-Elite with image-caption input, skipping all questions that contain no images, evaluating only multiple-choice questions, and setting the CUDA device automatically:

```shell
python eval.py \
  --problem_file ../data/problem_{version}.json \
  --subset ../data/hard_list_{version}.json \
  --caption_file ../data/captions_{version}.csv \
  --questions_type 0,1 \
  --image_type 1,2 \
  --input_type 1 \
  --model qwen-vl \
  --model_dir ../models/Qwen-VL-Chat
```

The evaluation script creates a folder named `results` under the root directory, and the results are saved in `../results/EXPERIMENT_NAME`. During the evaluation, the script saves checkpoints in `../results/EXPERIMENT_NAME/checkpoints`; you can delete them once the evaluation is done. If the evaluation is interrupted, you can resume from the last checkpoint:

```shell
python eval.py \
  --checkpoint_dir ../results/EXPERIMENT_NAME
```

Most of the arguments are saved in `../results/EXPERIMENT_NAME/args.json`, so you can continue the evaluation without specifying them all again. Please note that `--api_key` is not saved in `args.json` for security reasons, so you need to provide it again:

```shell
python eval.py \
  --checkpoint_dir ../results/EXPERIMENT_NAME \
  --api_key sk-************************************************
```

For more details on the arguments, run `python eval.py -h` and refer to `args.py` and `eval.py`.

### Add Support for Your Models

We recommend reading the code of the existing evaluators in `eval/models` before writing your own.

Create a `class YourModelEvaluator` and implement `generate_answer(self, question: dict)` so that it matches the interface expected by `eval.py` and `eval.sh`; this should greatly ease the coding process.

**Do not forget to add a reference to your evaluator in `args.py` so it can be selected from the command line.**
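For orientation, here is a minimal sketch of what such an evaluator might look like. The question fields and helper calls below are purely illustrative; mirror an existing evaluator in `eval/models` for the real interface.

```python
# eval/models/your_model.py -- illustrative sketch only, not the official template.

class YourModelEvaluator:
    def __init__(self, model_dir: str = "../models/YourModel"):
        # Load your model and tokenizer here, following the model's official guide.
        self.model_dir = model_dir

    def generate_answer(self, question: dict) -> str:
        # `question` is one question dict from the problem file; check the
        # existing evaluators for the exact field names and prompt format.
        prompt = build_prompt(question)             # hypothetical prompt builder
        answer = run_model(self.model_dir, prompt)  # hypothetical inference call
        return answer
```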

You can run `model_tester.py` in the `eval` folder to check the correctness of your implementation. Various problems, including implementation errors, small bugs in the code, and even wrong environment settings, may cause the evaluation to fail. The examples provided in the file cover most kinds of cases present in our benchmark. Feel free to modify it while debugging your code 😊

```shell
python model_tester.py <args> # args are similar to the default settings above
```

### Create Captions and OCR Data for Images

Generate captions or OCR data for the images and save them in a CSV file with the format below:

```
../data/images/czls/502_1.png,a cartoon drawing of a man standing in front of a large block
../data/images/czls/525_1.png,a chinese newspaper with the headline, china's new year
...
```

We provide two example scripts to generate captions (`image_caption.py`) and OCR data (`image_ocr.py`) for the images.
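As a rough illustration of the expected output (this is not the provided `image_ocr.py`; the output path and quoting behavior are assumptions to verify against the example scripts), an EasyOCR-based sketch could look like this:

```python
import csv
import glob

import easyocr

# OCR reader for Simplified Chinese and English.
reader = easyocr.Reader(["ch_sim", "en"])

# Placeholder output path; the official files use versioned names.
with open("../data/ocr_custom.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    for image_path in sorted(glob.glob("../data/images/**/*.png", recursive=True)):
        texts = reader.readtext(image_path, detail=0)  # detail=0 returns plain text strings
        writer.writerow([image_path, " ".join(texts)])
```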

## 📮 How to Submit

You need to first prepare a UTF-8 encoded JSON file with the following format:

```
{
    "czsx_0_0": {
        "question_id": "czsx_0_0",
        "question_image_number": 1,
        "image_list": [...],            # optional
        "input_message": ...,           # optional
        "prediction": "C"
    },
    ...
}
```
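Before zipping and uploading, you may want to sanity-check the file structure. The helper below is hypothetical (not part of the official tooling) and only verifies that the file is UTF-8 JSON and that each entry has the required keys from the format above:

```python
import json

REQUIRED_KEYS = {"question_id", "question_image_number", "prediction"}

def check_submission(path: str) -> None:
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    for qid, entry in data.items():
        missing = REQUIRED_KEYS - entry.keys()
        if missing:
            raise ValueError(f"{qid} is missing keys: {sorted(missing)}")
        if entry["question_id"] != qid:
            raise ValueError(f"question_id mismatch for {qid}")
    print(f"{len(data)} predictions look structurally valid.")

check_submission("prediction.json")
```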

If you evaluate the model with our official code, you can simply zip the prediction file `prediction.json` and the configuration file `args.json` from the experiment results folder `./results/EXPERIMENT_NAME` into a `.zip` archive.

Then, you can submit your result to our [evaluation page](https://opendfm.github.io/MULTI-Benchmark/static/pages/submit.html).

You are also welcome to open a pull request and contribute to our evaluation code. We will be very grateful for your contribution!

**[Notice]** Thank you for your interest in the **MULTI** dataset! If you want to add your model to our leaderboard, please fill in [this questionnaire](https://wj.sjtu.edu.cn/q/89UmRAJn). Your information will be kept strictly confidential, so please feel free to fill it out. 🤗


## 📑 Citation

If you find our work useful, please cite us!

```
@misc{zhu2024multi,
      title={{MULTI}: Multimodal Understanding Leaderboard with Text and Images}, 
      author={Zichen Zhu and Yang Xu and Lu Chen and Jingkai Yang and Yichuan Ma and Yiming Sun and Hailin Wen and Jiaqi Liu and Jinyu Cai and Yingzi Ma and Situo Zhang and Zihan Zhao and Liangtai Sun and Kai Yu},
      year={2024},
      eprint={2402.03173},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## 📧 Contact Us

If you have any questions, please feel free to contact us via email at `[email protected]` or `[email protected]`.