---
configs:
  - config_name: Idioms Detection Task
    data_files:
      - split: test
        path: idiom_understanding_task.csv
  - config_name: Metaphors Detection Task
    data_files:
      - split: test
        path: metaphor_understanding_task.csv
license: cc-by-4.0
language:
  - en
tags:
  - figurative-language
  - multimodal-figurative-language
  - commonsense-reasoning
  - visual-reasoning
size_categories:
  - 1K<n<10K
---

# Dataset Card for IRFL

## Dataset Description

The IRFL dataset consists of idioms, similes, and metaphors paired with matching figurative and literal images, together with two novel tasks of multimodal figurative detection and retrieval.

Using human annotation and an automatic pipeline we created, we collected figurative and literal images for textual idioms, metaphors, and similes. We annotated the relations between these images and the figurative phrase they originated from. We created two novel tasks of figurative detection and retrieval using these images.

The figurative detection task evaluates the ability of Vision and Language Pre-Trained Models (VL-PTMs) to choose, out of several candidates, the image that best visualizes the meaning of a figurative expression. The retrieval task examines VL-PTMs' preference for figurative images: given a set of figurative and partially literal images, the model must rank the images by its matching score so that the figurative images are ranked higher. Performance is measured by precision at k, where k is the number of figurative images in the input.
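As a rough illustration of the retrieval metric, the sketch below ranks a toy candidate set by a matching score and computes precision at k. The image identifiers and scores are hypothetical stand-ins for a real VL model's image-text matching scores.

```python
def precision_at_k(ranked_labels, k):
    """Fraction of the top-k ranked items that are figurative (label == 1)."""
    return sum(ranked_labels[:k]) / k

# Hypothetical (image_id, matching_score, is_figurative) triples.
scored = [
    ("img_a", 0.91, 1),
    ("img_b", 0.55, 0),
    ("img_c", 0.78, 1),
    ("img_d", 0.62, 0),
]

# Rank images by descending matching score.
ranked = sorted(scored, key=lambda item: item[1], reverse=True)
labels = [is_fig for _, _, is_fig in ranked]

# k = number of figurative images in the input (here 2).
k = sum(is_fig for _, _, is_fig in scored)
print(precision_at_k(labels, k))  # → 1.0: both figurative images rank on top
```

Here both figurative images outscore the partial-literal ones, so precision at k is 1.0; a model that ranked a distractor into the top k would score lower.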

We evaluated state-of-the-art VL models and found that the best models achieve only 22%, 30%, and 66% accuracy on our detection task for idioms, metaphors, and similes, respectively, compared to human accuracy of 97%, 99.7%, and 100%. The best model achieved an F1 score of 61 on the retrieval task.

## Leaderboards

https://irfl-dataset.github.io/leaderboard

## Colab notebook code for IRFL evaluation

https://colab.research.google.com/drive/1zbW7R8Cn9sXICV3x_FGKjKIKu8GCrCme?usp=sharing

## Languages

English.

## Dataset Structure

### Data Fields

★ marks idiom-only fields.

#### Detection task

- `query` (★): the idiom definition that the answer image originated from.
- `distractors`: the distractor images.
- `answer`: the correct image.
- `figurative_type`: idiom | metaphor | simile.
- `images_metadata`: metadata for the distractor and answer images.
- `type`: the correct image type (Figurative or Figurative Literal).
- `definition` (★): a list of all definitions of the idiom.
- `phrase`: the figurative phrase.
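A minimal sketch of how a detection-task row might be consumed: build the candidate set from the distractors plus the answer, score each image against the phrase, and pick the highest-scoring one. The `score` function here is a hypothetical stand-in (character-overlap similarity); a real evaluation would use a VL model's image-text matching score.

```python
def score(phrase, image):
    # Hypothetical stand-in for a VL model's image-text matching score:
    # Jaccard similarity over characters, just to make the sketch runnable.
    a, b = set(phrase), set(image)
    return len(a & b) / len(a | b)

def detect(phrase, distractors, answer):
    """Return the candidate the model prefers; a hit iff it equals `answer`."""
    candidates = distractors + [answer]
    return max(candidates, key=lambda img: score(phrase, img))

# Toy row with the detection-task fields described above.
row = {
    "phrase": "break the ice",
    "distractors": ["img_1.jpg", "img_2.jpg", "img_3.jpg"],
    "answer": "img_4.jpg",
}
pred = detect(row["phrase"], row["distractors"], row["answer"])
print(pred == row["answer"])  # True iff the model picked the correct image
```

Detection accuracy over the split is then simply the fraction of rows where the predicted image equals `answer`.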

#### Retrieval task

- `type`: the rival categories, FvsPO (Figurative images vs. Partial Objects) or FLvsPO (Figurative Literal images vs. Partial Objects).
- `figurative_type`: idiom | metaphor | simile.
- `first_category`: the first-category images (Figurative images for FvsPO, Figurative Literal images for FLvsPO).
- `second_category`: the second-category images (Partial Objects).
- `definition` (★): a list of all definitions of the idiom.
- `phrase`: the figurative phrase.

The idioms, metaphors, and similes datasets contain all the figurative phrases, annotated images, and corresponding metadata.

## Dataset Collection

We collected figurative and literal images for textual idioms, metaphors, and similes using an automatic pipeline we created. We annotated the relations between these images and the figurative phrase they originated from.

### Annotation process

We paid Amazon Mechanical Turk Workers to annotate the relation between each image and phrase (Figurative vs. Literal).

## Considerations for Using the Data

- Idioms: Annotated by crowdworkers with rigorous qualifications and training.
- Metaphors and Similes: Annotated by expert team members.
- Detection and Ranking Tasks: Annotated by crowdworkers not involved in prior IRFL annotations.

## Licensing Information

CC BY 4.0

## Citation Information

```bibtex
@misc{yosef2023irfl,
  title={IRFL: Image Recognition of Figurative Language},
  author={Ron Yosef and Yonatan Bitton and Dafna Shahaf},
  year={2023},
  eprint={2303.15445},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```