MorVentura committed on
Commit afeea39 · verified · 1 Parent(s): b3c1d81

citations and links

Files changed (1): README.md +16 -0
README.md CHANGED
@@ -20,6 +20,10 @@ Recent VLMs have demonstrated impressive capabilities, yet their ability to infe
  NL-Eye adapts the **abductive Natural Language Inference (NLI)** task to the visual domain, requiring models to evaluate the **plausibility of hypothesis images** based on a premise image and explain their decisions. The dataset contains **350 carefully curated triplet examples** (1,050 images) spanning diverse reasoning categories, temporal categories and domains.
  NL-Eye represents a crucial step toward developing **VLMs capable of robust multimodal reasoning** for real-world applications, such as accident-prevention bots and generated video verification.

+ project page: [NL-Eye project page](https://venturamor.github.io/NLEye/)
+
+ preprint: [NL-Eye arxiv](https://arxiv.org/abs/2410.02613)
+
  ---

  ## **Dataset Structure**
@@ -53,3 +57,15 @@ The `images/` directory contains **subdirectories named after each `sample_id`**

  ## **Usage**
  This dataset is **only for test purposes**.
+
+ ### Citation
+ ```bibtex
+ @misc{ventura2024nleye,
+ title={NL-Eye: Abductive NLI for Images},
+ author={Mor Ventura and Michael Toker and Nitay Calderon and Zorik Gekhman and Yonatan Bitton and Roi Reichart},
+ year={2024},
+ eprint={2410.02613},
+ archivePrefix={arXiv},
+ primaryClass={cs.CV}
+ }
+ ```
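The second hunk's context notes that the `images/` directory holds one subdirectory per `sample_id`. A minimal sketch for walking that layout, assuming only the directory-per-sample convention stated in the README (the file names inside each sample directory are hypothetical and may differ in the actual dataset):

```python
from pathlib import Path

def list_triplets(root="images"):
    """Map each sample_id (subdirectory name) to the sorted image file
    names it contains. Layout assumed from the README: images/<sample_id>/...
    """
    triplets = {}
    for sample_dir in sorted(Path(root).iterdir()):
        if sample_dir.is_dir():
            # Collect every file in the sample's directory, sorted for stability.
            triplets[sample_dir.name] = sorted(p.name for p in sample_dir.glob("*"))
    return triplets
```

With 350 triplets, the returned mapping would have 350 keys, each listing that sample's premise and hypothesis images.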