---
license: apache-2.0
task_categories:
- question-answering
- summarization
- text-generation
language:
- en
size_categories:
- 1K<n<10K
configs:
- config_name: L-CiteEval-Data_narrativeqa
data_files:
- split: test
path: "L-CiteEval-Data/narrativeqa.json"
- config_name: L-CiteEval-Data_natural_questions
data_files:
- split: test
path: "L-CiteEval-Data/natural_questions.json"
- config_name: L-CiteEval-Data_hotpotqa
data_files:
- split: test
path: "L-CiteEval-Data/hotpotqa.json"
- config_name: L-CiteEval-Data_2wikimultihopqa
data_files:
- split: test
path: "L-CiteEval-Data/2wikimultihopqa.json"
- config_name: L-CiteEval-Data_gov_report
data_files:
- split: test
path: "L-CiteEval-Data/gov_report.json"
- config_name: L-CiteEval-Data_multi_news
data_files:
- split: test
path: "L-CiteEval-Data/multi_news.json"
- config_name: L-CiteEval-Data_qmsum
data_files:
- split: test
path: "L-CiteEval-Data/qmsum.json"
- config_name: L-CiteEval-Data_locomo
data_files:
- split: test
path: "L-CiteEval-Data/locomo.json"
- config_name: L-CiteEval-Data_dialsim
data_files:
- split: test
path: "L-CiteEval-Data/dialsim.json"
- config_name: L-CiteEval-Data_niah
data_files:
- split: test
path: "L-CiteEval-Data/niah.json"
- config_name: L-CiteEval-Data_counting_stars
data_files:
- split: test
path: "L-CiteEval-Data/counting_stars.json"
- config_name: L-CiteEval-Length_narrativeqa
data_files:
- split: test
path: "L-CiteEval-Length/narrativeqa.json"
- config_name: L-CiteEval-Length_hotpotqa
data_files:
- split: test
path: "L-CiteEval-Length/hotpotqa.json"
- config_name: L-CiteEval-Length_gov_report
data_files:
- split: test
path: "L-CiteEval-Length/gov_report.json"
- config_name: L-CiteEval-Length_locomo
data_files:
- split: test
path: "L-CiteEval-Length/locomo.json"
- config_name: L-CiteEval-Length_counting_stars
data_files:
- split: test
path: "L-CiteEval-Length/counting_stars.json"
- config_name: L-CiteEval-Hardness_narrativeqa
data_files:
- split: test
path: "L-CiteEval-Hardness/narrativeqa.json"
- config_name: L-CiteEval-Hardness_hotpotqa
data_files:
- split: test
path: "L-CiteEval-Hardness/hotpotqa.json"
- config_name: L-CiteEval-Hardness_gov_report
data_files:
- split: test
path: "L-CiteEval-Hardness/gov_report.json"
- config_name: L-CiteEval-Hardness_locomo
data_files:
- split: test
path: "L-CiteEval-Hardness/locomo.json"
- config_name: L-CiteEval-Hardness_counting_stars
data_files:
- split: test
path: "L-CiteEval-Hardness/counting_stars.json"
---
# L-CITEEVAL: DO LONG-CONTEXT MODELS TRULY LEVERAGE CONTEXT FOR RESPONDING?
[Paper](https://arxiv.org/abs/2410.02115)   [Github](https://github.com/ZetangForward/L-CITEEVAL)   [Zhihu](https://zhuanlan.zhihu.com/p/817442176)
## Benchmark Quickview
*L-CiteEval* is a multi-task long-context understanding benchmark with citations. It covers **5 task categories**, including single-document question answering, multi-document question answering, summarization, dialogue understanding, and synthetic tasks, and encompasses **11 different long-context tasks** with context lengths ranging from **8K to 48K**.

## Data Preparation
#### Load Data
```
from datasets import load_dataset

datasets = ["narrativeqa", "natural_questions", "hotpotqa", "2wikimultihopqa", "gov_report", "multi_news", "qmsum", "locomo", "dialsim", "counting_stars", "niah"]
# L-CiteEval-Length and L-CiteEval-Hardness cover a subset of the tasks
length_hardness_datasets = ["narrativeqa", "hotpotqa", "gov_report", "locomo", "counting_stars"]

for dataset in datasets:
    # Load L-CiteEval
    data = load_dataset('Jonaszky123/L-CiteEval', f"L-CiteEval-Data_{dataset}")

for dataset in length_hardness_datasets:
    # Load L-CiteEval-Length
    data = load_dataset('Jonaszky123/L-CiteEval', f"L-CiteEval-Length_{dataset}")
    # Load L-CiteEval-Hardness
    data = load_dataset('Jonaszky123/L-CiteEval', f"L-CiteEval-Hardness_{dataset}")
```
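If you would rather discover the available subsets programmatically than hard-code the list above, the snippet below is a minimal sketch using the standard `datasets` utilities (`get_dataset_config_names`). Each configuration exposes a single `test` split; the choice of `L-CiteEval-Data_narrativeqa` here is just an example.
```
from datasets import get_dataset_config_names, load_dataset

# List every configuration hosted in this repository, e.g.
# "L-CiteEval-Data_narrativeqa", "L-CiteEval-Length_hotpotqa", ...
configs = get_dataset_config_names('Jonaszky123/L-CiteEval')
print(configs)

# Each configuration provides a single "test" split
data = load_dataset('Jonaszky123/L-CiteEval', "L-CiteEval-Data_narrativeqa")
print(data["test"])            # row count and column names
print(data["test"][0].keys())  # fields of the first example
```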
<!-- You can get the L-CiteEval data from [🤗 Hugging face](). Once downloaded, place the data in the dataset folder. -->
All data in L-CiteEval follows the format below:
```
{
    "id": "The identifier of the data entry",
    "question": "The task question, e.g., for single-document QA; it may be omitted in summarization tasks",
    "answer": "The correct or expected answer to the question, used for evaluating correctness",
    "docs": "The context, divided into fixed-length chunks",
    "length": "The context length",
    "hardness": "The difficulty level in L-CiteEval-Hardness: easy, medium, or hard"
}
```
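As a quick sanity check, the sketch below iterates over a few examples of one subset and prints these fields. It assumes `docs` is a list of chunk strings and uses `example.get("hardness")` since that field is described only for L-CiteEval-Hardness; the chosen subset (`L-CiteEval-Hardness_hotpotqa`) is just an example.
```
from datasets import load_dataset

# Inspect a few examples; field names follow the schema above.
data = load_dataset('Jonaszky123/L-CiteEval', "L-CiteEval-Hardness_hotpotqa")

for example in data["test"].select(range(3)):
    print(example["id"], example["length"], example.get("hardness"))
    print("Question:", example["question"])
    print("Answer:", example["answer"])
    # "docs" holds the long context divided into fixed-length chunks
    print("Number of context chunks:", len(example["docs"]))
```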
You can find the evaluation code in our [GitHub repository](https://github.com/ZetangForward/L-CITEEVAL).
## Citation
If you find our work helpful, please cite our paper:
```
@misc{tang2024lciteeval,
      title={L-CiteEval: Do Long-Context Models Truly Leverage Context for Responding?},
      author={Zecheng Tang and Keyan Zhou and Juntao Li and Baibei Ji and Jianye Hou and Min Zhang},
      year={2024},
      eprint={2410.02115},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```