---
task_categories:
- question-answering
language:
- tr
pretty_name: TurkishMMLU
configs:
  - config_name: Biology
    data_files:
    - split: dev
      path: "dev/TurkishMMLU_Biology.json"
    - split: test
      path: "test/TurkishMMLU_Biology.json"
  - config_name: Geography
    data_files:
    - split: dev
      path: "dev/TurkishMMLU_Geography.json"
    - split: test
      path: "test/TurkishMMLU_Geography.json"
  - config_name: Chemistry
    data_files:
    - split: dev
      path: "dev/TurkishMMLU_Chemistry.json"
    - split: test
      path: "test/TurkishMMLU_Chemistry.json"
  - config_name: History
    data_files:
    - split: dev
      path: "dev/TurkishMMLU_History.json"
    - split: test
      path: "test/TurkishMMLU_History.json"
  - config_name: Mathematics
    data_files:
    - split: dev
      path: "dev/TurkishMMLU_Mathematics.json"
    - split: test
      path: "test/TurkishMMLU_Mathematics.json"
  - config_name: Philosophy
    data_files:
    - split: dev
      path: "dev/TurkishMMLU_Philosophy.json"
    - split: test
      path: "test/TurkishMMLU_Philosophy.json"
  - config_name: Physics
    data_files:
    - split: dev
      path: "dev/TurkishMMLU_Physics.json"
    - split: test
      path: "test/TurkishMMLU_Physics.json"
  - config_name: Religion_and_Ethics
    data_files:
    - split: dev
      path: "dev/TurkishMMLU_Religion and Ethics.json"
    - split: test
      path: "test/TurkishMMLU_Religion and Ethics.json"
  - config_name: Turkish_Language_and_Literature
    data_files:
    - split: dev
      path: "dev/TurkishMMLU_Turkish Language and Literature.json"
    - split: test
      path: "test/TurkishMMLU_Turkish Language and Literature.json"
  - config_name: All
    data_files:
    - split: test
      path: "turkishmmlu_sub.json"
---
# TurkishMMLU

This repository contains the code and data analysis of TurkishMMLU for the ACL 2024 SIGTURK Workshop. TurkishMMLU is a multiple-choice question-answering dataset for the Turkish Natural Language Processing (NLP) community, based on the Turkish high-school curricula for nine subjects.

To access this dataset, please send an email to:
[email protected] or [email protected].

## Abstract

Multiple choice question answering tasks evaluate the reasoning, comprehension, and mathematical abilities of Large Language Models (LLMs). While existing benchmarks employ automatic translation for multilingual evaluation, this approach is error-prone and potentially introduces culturally biased questions, especially in social sciences. We introduce the first multitask, multiple-choice Turkish QA benchmark, TurkishMMLU, to evaluate LLMs' understanding of the Turkish language. TurkishMMLU includes over 10,000 questions, covering 9 different subjects from Turkish high-school education curricula. These questions are written by curriculum experts, suitable for the high-school curricula in Turkey, covering subjects ranging from natural sciences and math questions to more culturally representative topics such as Turkish Literature and the history of the Turkish Republic. We evaluate over 20 LLMs, including multilingual open-source (e.g., Gemma, Llama, MT5), closed-source (GPT 4o, Claude, Gemini), and Turkish-adapted (e.g., Trendyol) models. We provide an extensive evaluation, including zero-shot and few-shot evaluation of LLMs, chain-of-thought reasoning, and question difficulty analysis along with model performance. We provide an in-depth analysis of the Turkish capabilities and limitations of current LLMs to provide insights for future LLMs for the Turkish language. We publicly release our code for the dataset and evaluation.

## Dataset


The dataset is divided into four categories: Natural Sciences, Mathematics, Language, and Social Sciences and Humanities, covering a total of nine subjects from the Turkish high-school curriculum. It is provided in multiple-choice format for LLM evaluation. Each question also carries a difficulty indicator, referred to as the correctness ratio.
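
Given the configs declared in the YAML header, each subject can be loaded with the Hugging Face `datasets` library. The sketch below is illustrative only: the Hub repository ID is a placeholder and the difficulty field name is an assumption, so inspect the JSON files for the actual schema.

```python
from datasets import load_dataset

# Hypothetical repo ID -- replace with this dataset's actual Hub path.
# Access must first be granted by the maintainers (see the contact note above).
REPO_ID = "<org>/TurkishMMLU"

# Config names match the YAML header above, e.g. "Biology", "History", "All".
biology = load_dataset(REPO_ID, "Biology")

dev = biology["dev"]    # few-shot exemplars
test = biology["test"]  # evaluation questions

# Inspect one record to confirm the actual JSON schema before relying on it.
print(test[0])

# If a per-question difficulty field exists (the "correctness ratio" mentioned
# above; field name assumed), evaluation can be stratified by difficulty:
# hard = test.filter(lambda ex: ex["correctness_ratio"] < 0.5)
```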

## Evaluation

The 5-shot evaluation results from the paper include state-of-the-art open- and closed-source LLMs with different architectures; both multilingual and Turkish-adapted models are tested. A sketch of the prompt construction follows the table.

| Model               | Source | Average | Natural Sciences | Math | Turkish Lang. & Lit. | Social Sciences and Humanities |
| ------------------- | ------ | ------- | ---------------- | ---- | ------------- | ------------------------------ |
| GPT 4o              | Closed | 83.1    | 75.3             | 59.0 | 82.0          | 95.3                           |
| Claude-3 Opus       | Closed | 79.1    | 71.7             | 59.0 | 77.0          | 90.3                           |
| GPT 4-turbo         | Closed | 75.7    | 70.3             | 57.0 | 67.0          | 86.5                           |
| Llama-3 70B-IT      | Open   | 67.3    | 56.7             | 42.0 | 57.0          | 84.3                           |
| Claude-3 Sonnet     | Closed | 67.3    | 67.3             | 44.0 | 58.0          | 75.5                           |
| Llama-3 70B         | Open   | 66.1    | 56.0             | 37.0 | 57.0          | 83.3                           |
| Claude-3 Haiku      | Closed | 65.4    | 57.0             | 40.0 | 61.0          | 79.3                           |
| Gemini 1.0-pro      | Closed | 63.2    | 52.7             | 29.0 | 63.0          | 79.8                           |
| C4AI Command-r+     | Open   | 60.6    | 50.0             | 26.0 | 57.0          | 78.0                           |
| Aya-23 35B          | Open   | 55.6    | 43.3             | 31.0 | 49.0          | 72.5                           |
| C4AI Command-r      | Open   | 54.9    | 44.7             | 29.0 | 49.0          | 70.5                           |
| Mixtral 8x22B       | Open   | 54.8    | 45.3             | 27.0 | 49.0          | 70.3                           |
| GPT 3.5-turbo       | Closed | 51.0    | 42.7             | 39.0 | 35.0          | 61.8                           |
| Llama-3 8B-IT       | Open   | 46.4    | 36.7             | 29.0 | 39.0          | 60.0                           |
| Llama-3 8B          | Open   | 46.2    | 37.3             | 30.0 | 33.0          | 60.3                           |
| Mixtral 8x7B-IT     | Open   | 45.2    | 41.3             | 28.0 | 39.0          | 54.0                           |
| Aya-23 8B           | Open   | 45.0    | 39.0             | 23.0 | 31.0          | 58.5                           |
| Gemma 7B            | Open   | 43.6    | 34.3             | 22.0 | 47.0          | 55.0                           |
| Aya-101             | Open   | 40.7    | 31.3             | 24.0 | 38.0          | 55.0                           |
| Trendyol-LLM 7B-C-D | Open   | 34.1    | 30.3             | 22.0 | 28.0          | 41.5                           |
| mT0-xxl             | Open   | 33.9    | 29.3             | 28.0 | 21.0          | 42.0                           |
| Mistral 7B-IT       | Open   | 32.0    | 34.3             | 26.0 | 38.0          | 30.3                           |
| Llama-2 7B          | Open   | 22.3    | 25.3             | 20.0 | 20.0          | 19.8                           |
| mT5-xxl             | Open   | 18.1    | 19.3             | 24.0 | 14.0          | 16.8                           |
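
For context on how such 5-shot scores are produced, below is a minimal sketch of assembling a 5-shot prompt from a subject's `dev` split for a single `test` question. The repository ID, the per-record field names (`question`, `choices`, `answer`), and the Turkish `Soru`/`Cevap` template are all assumptions, not the paper's exact setup.

```python
from datasets import load_dataset

def five_shot_prompt(subject: str, idx: int,
                     repo_id: str = "<org>/TurkishMMLU") -> str:
    """Assemble a 5-shot prompt for one test question of `subject`.

    The field names ('question', 'choices', 'answer') and the Turkish
    template are assumptions -- verify them against the JSON schema.
    """
    ds = load_dataset(repo_id, subject)

    def fmt(ex: dict, with_answer: bool) -> str:
        # Option letters A-E; zip truncates if a question has fewer options.
        options = "\n".join(
            f"{letter}. {text}"
            for letter, text in zip("ABCDE", ex["choices"])
        )
        tail = f"Cevap: {ex['answer']}" if with_answer else "Cevap:"
        return f"Soru: {ex['question']}\n{options}\n{tail}"

    shots = [fmt(ex, True) for ex in ds["dev"].select(range(5))]
    return "\n\n".join(shots + [fmt(ds["test"][idx], False)])
```

The `dev` split of each subject supplies the five exemplars, which is why every config ships a small `dev` file alongside `test`.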

## Citation

```
@misc{yüksel2024turkishmmlumeasuringmassivemultitask,
  title={TurkishMMLU: Measuring Massive Multitask Language Understanding in Turkish},
  author={Arda Yüksel and Abdullatif Köksal and Lütfi Kerem Şenel and Anna Korhonen and Hinrich Schütze},
  year={2024},
  eprint={2407.12402},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2407.12402},
}
```