---
license: apache-2.0
language:
- en
tags:
- vlm
- reasoning
- multimodal
- nli
size_categories:
- n<1K
task_categories:
- visual-question-answering
---

# **NL-Eye Benchmark**

Will a Visual Language Model (VLM)-based bot warn us about slipping if it detects a wet floor?

Recent VLMs have demonstrated impressive capabilities, yet their ability to infer outcomes and causes remains underexplored. To address this, we introduce **NL-Eye**, a benchmark designed to assess VLMs' **visual abductive reasoning skills**.

NL-Eye adapts the **abductive Natural Language Inference (NLI)** task to the visual domain, requiring models to evaluate the **plausibility of hypothesis images** based on a premise image and to explain their decisions. The dataset contains **350 carefully curated triplet examples** (1,050 images) spanning diverse reasoning categories, temporal categories, and domains.

NL-Eye represents a crucial step toward developing **VLMs capable of robust multimodal reasoning** for real-world applications such as accident-prevention bots and generated-video verification.

Project page: [NL-Eye project page](https://venturamor.github.io/NLEye/)

Preprint: [NL-Eye arXiv](https://arxiv.org/abs/2410.02613)

---

## **Dataset Structure**

The dataset contains:

- A **CSV file** with annotations (`test_set.csv`).
- An **images directory** with a subdirectory for each sample (`images/`).

### **CSV Fields:**

| Field | Type | Description |
|-------|------|-------------|
| `sample_id` | `int` | Unique identifier for each sample. |
| `reasoning_category` | `string` | One of the six reasoning categories (physical, functional, logical, emotional, cultural, or social). |
| `domain` | `string` | One of the ten domain categories (e.g., education, technology). |
| `time_direction` | `string` | One of the three time directions (forward, backward, or parallel). |
| `time_duration` | `string` | One of the three time durations (short, long, or parallel). |
| `premise_description` | `string` | Description of the premise. |
| `plausible_hypothesis_description` | `string` | Description of the plausible hypothesis. |
| `implausible_hypothesis_description` | `string` | Description of the implausible hypothesis. |
| `gold_explanation` | `string` | The gold explanation of the sample's plausibility. |
| `additional_valid_human_explanations` | `string` (optional) | Additional human-written (crowd-worker) explanations, included for explanation diversity. |

> **Note**: Not all samples contain `additional_valid_human_explanations`.
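
For quick inspection, the annotation table can be loaded with `pandas`. The snippet below is a minimal sketch that assumes the dataset files have been downloaded locally with the layout described on this page.

```python
# Minimal sketch: load the NL-Eye annotations, assuming `test_set.csv`
# sits in the current working directory.
import pandas as pd

df = pd.read_csv("test_set.csv")

print(len(df), "samples")                       # 350 triplets are expected
print(df.columns.tolist())                      # fields listed in the table above
print(df["reasoning_category"].value_counts())  # distribution over the six reasoning categories
```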

---

### **Images Directory Structure:**

The `images/` directory contains **subdirectories named after each `sample_id`**. Each subdirectory includes:

- **`premise.png`**: Image showing the premise.
- **`hypothesis1.png`**: Plausible hypothesis.
- **`hypothesis2.png`**: Implausible hypothesis.
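
Given this layout, a small helper can assemble one triplet. This is a sketch that assumes the `images/` root above and uses Pillow; the function name `load_triplet` is illustrative and not part of the dataset.

```python
# Sketch of reading one NL-Eye triplet from the directory layout above.
from pathlib import Path
from PIL import Image

def load_triplet(sample_id: int, root: str = "images"):
    """Return (premise, plausible hypothesis, implausible hypothesis) as PIL images."""
    sample_dir = Path(root) / str(sample_id)
    return (
        Image.open(sample_dir / "premise.png"),      # premise image
        Image.open(sample_dir / "hypothesis1.png"),  # plausible hypothesis
        Image.open(sample_dir / "hypothesis2.png"),  # implausible hypothesis
    )
```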

## **Usage**

This dataset is intended **for evaluation (test) purposes only**.
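
Evaluation therefore amounts to iterating over the 350 triplets and asking a model which hypothesis is more plausible given the premise. The sketch below is one possible, model-agnostic protocol rather than an official evaluation script; `query_vlm` is a hypothetical placeholder for your own VLM call.

```python
# Model-agnostic evaluation sketch. `query_vlm` is a hypothetical placeholder:
# it should return "1" or "2" for the hypothesis the model finds more plausible
# given the premise image.
from pathlib import Path
import pandas as pd

def query_vlm(premise: Path, hypothesis1: Path, hypothesis2: Path) -> str:
    raise NotImplementedError("plug in your VLM call here")

df = pd.read_csv("test_set.csv")
correct = 0
for sample_id in df["sample_id"]:
    sample_dir = Path("images") / str(sample_id)
    prediction = query_vlm(
        sample_dir / "premise.png",
        sample_dir / "hypothesis1.png",
        sample_dir / "hypothesis2.png",
    )
    correct += prediction == "1"  # hypothesis1.png is the plausible hypothesis

print(f"Plausibility-selection accuracy: {correct / len(df):.3f}")
```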

### Citation

```bibtex
@misc{ventura2024nleye,
      title={NL-Eye: Abductive NLI for Images},
      author={Mor Ventura and Michael Toker and Nitay Calderon and Zorik Gekhman and Yonatan Bitton and Roi Reichart},
      year={2024},
      eprint={2410.02613},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```