---
license: apache-2.0
language:
- en
tags:
- vlm
- reasoning
- multimodal
- nli
size_categories:
- n<1K
task_categories:
- visual-question-answering
---

# **NL-Eye Benchmark**

Will a Visual Language Model (VLM)-based bot warn us about slipping if it detects a wet floor?
Recent VLMs have demonstrated impressive capabilities, yet their ability to infer outcomes and causes remains underexplored. To address this, we introduce **NL-Eye**, a benchmark designed to assess VLMs' **visual abductive reasoning skills**.
NL-Eye adapts the **abductive Natural Language Inference (NLI)** task to the visual domain, requiring models to evaluate the **plausibility of hypothesis images** based on a premise image and to explain their decisions. The dataset contains **350 carefully curated triplet examples** (1,050 images) spanning diverse reasoning categories, temporal categories, and domains.
NL-Eye represents a crucial step toward developing **VLMs capable of robust multimodal reasoning** for real-world applications, such as accident-prevention bots and generated-video verification.

Project page: [NL-Eye project page](https://venturamor.github.io/NLEye/)

Preprint: [NL-Eye on arXiv](https://arxiv.org/abs/2410.02613)

---

## **Dataset Structure**
The dataset contains:
- A **CSV file** with annotations (`test_set.csv`).
- An **images directory** with subdirectories for each sample.

### **CSV Fields:**
| Field | Type | Description |
|-------|------|-------------|
| `sample_id` | `int` | Unique identifier for each sample. |
| `reasoning_category` | `string` | One of six reasoning categories: physical, functional, logical, emotional, cultural, or social. |
| `domain` | `string` | One of ten domain categories (e.g., education, technology). |
| `time_direction` | `string` | One of three time directions: forward, backward, or parallel. |
| `time_duration` | `string` | One of three time durations: short, long, or parallel. |
| `premise_description` | `string` | Textual description of the premise image. |
| `plausible_hypothesis_description` | `string` | Textual description of the plausible hypothesis image. |
| `implausible_hypothesis_description` | `string` | Textual description of the implausible hypothesis image. |
| `gold_explanation` | `string` | The gold explanation of the sample's plausibility. |
| `additional_valid_human_explanations` | `string` (optional) | Additional human-written (crowd-worker) explanations, included for explanation diversity. |

> **Note**: Not all samples contain `additional_valid_human_explanations`.

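As a quick sanity check, the annotations can be inspected with pandas. This is a minimal sketch that assumes `test_set.csv` sits at the dataset root, as described above:

```python
import pandas as pd

# Load the annotation file (assumed to sit at the dataset root).
df = pd.read_csv("test_set.csv")

# Each row describes one premise/hypotheses triplet.
print(df.columns.tolist())
print(df["reasoning_category"].value_counts())

# `additional_valid_human_explanations` is optional, so filter out
# missing values before relying on it.
extra = df["additional_valid_human_explanations"].dropna()
print(f"{len(extra)} of {len(df)} samples have additional explanations")
```
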
---

### **Images Directory Structure:**
The `images/` directory contains **subdirectories named after each `sample_id`**. Each subdirectory includes:
- **`premise.png`**: The premise image.
- **`hypothesis1.png`**: The plausible hypothesis image.
- **`hypothesis2.png`**: The implausible hypothesis image.

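Putting the two together, the following sketch loads the image triplet for a single sample. It assumes `images/` and `test_set.csv` live side by side and uses Pillow for image loading:

```python
from pathlib import Path

import pandas as pd
from PIL import Image

df = pd.read_csv("test_set.csv")
images_root = Path("images")

# Assemble the image triplet for the first sample.
row = df.iloc[0]
sample_dir = images_root / str(row["sample_id"])
premise = Image.open(sample_dir / "premise.png")
plausible = Image.open(sample_dir / "hypothesis1.png")    # plausible hypothesis
implausible = Image.open(sample_dir / "hypothesis2.png")  # implausible hypothesis

print(row["premise_description"])
print(premise.size, plausible.size, implausible.size)
```
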
## **Usage**
This dataset is intended **for evaluation (test) purposes only**.

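For orientation, here is a hypothetical evaluation loop for the plausibility task. `predict_plausible` is a placeholder (not part of this dataset or any library) standing in for your VLM call; the random choice below is only a ~50% chance baseline. Since `hypothesis1.png` is always the plausible image by construction, shuffle the presentation order in a real evaluation to avoid position bias:

```python
import random
from pathlib import Path

import pandas as pd


def predict_plausible(premise: Path, hyp1: Path, hyp2: Path) -> int:
    """Placeholder for a VLM call: return 1 if hyp1 looks more plausible
    given the premise, else 2. Here it is just a chance baseline."""
    return random.choice([1, 2])


df = pd.read_csv("test_set.csv")
images_root = Path("images")

correct = 0
for _, row in df.iterrows():
    d = images_root / str(row["sample_id"])
    # hypothesis1.png is the plausible image, so a correct prediction is 1.
    if predict_plausible(d / "premise.png", d / "hypothesis1.png", d / "hypothesis2.png") == 1:
        correct += 1

print(f"Plausibility accuracy: {correct / len(df):.3f}")
```
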
### Citation
```bibtex
@misc{ventura2024nleye,
      title={NL-Eye: Abductive NLI for Images},
      author={Mor Ventura and Michael Toker and Nitay Calderon and Zorik Gekhman and Yonatan Bitton and Roi Reichart},
      year={2024},
      eprint={2410.02613},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```