Update README.md

---
license: cc-by-4.0
dataset_info:
  features:
  - name: mask
    dtype: image
  - name: target_img_dataset
    dtype: string
  - name: img_id
    dtype: string
  - name: ann_id
    dtype: string
  splits:
  - name: train
    num_bytes: 2555862476.36
    num_examples: 888230
  - name: test
    num_bytes: 35729190.0
    num_examples: 752
  download_size: 681492456
  dataset_size: 2591591666.36
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---

# Dataset Card for PIPE Masks Dataset

## Dataset Summary

The PIPE (Paint by InPaint Edit) dataset is designed to enhance the efficacy of mask-free, instruction-following image editing models by providing a large-scale collection of image pairs and diverse object addition instructions. Comprising approximately 1 million image pairs, PIPE includes both source and target images, along with corresponding natural language instructions for object addition. The dataset leverages extensive image segmentation datasets (COCO, Open Images, LVIS) and employs a Stable Diffusion-based inpainting model to create pairs of images with and without objects. Additionally, it incorporates a variety of instruction generation techniques, including class name-based, VLM-LLM-based, and manual reference-based instructions, resulting in nearly 1.9 million different instructions. We also provide a test set for evaluating object addition.
Here, we provide the masks used in the inpainting process to generate the source images of the PIPE dataset.
Further details can be found in our [project page](https://rotsteinnoam.github.io/Paint-by-Inpaint) and [paper](https://arxiv.org/abs/2404.18212).

## Columns

- `mask`: The removed-object mask used to create the inpainted (source) image.
- `target_img_dataset`: The dataset to which the target image belongs.
- `img_id`: The unique identifier of the ground-truth (GT) image, i.e., the target image.
- `ann_id`: The identifier of the segmentation annotation of the removed object.
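
To make these columns concrete, the following minimal sketch loads only the (small) test split and inspects a single record. It assumes the `data/test-*` layout declared in the YAML header above; the full loading example is given in the next section.

```python
from datasets import load_dataset

# Load just the test split (752 examples) to inspect the schema.
masks_test = load_dataset("paint-by-inpaint/PIPE_Masks", data_files={"test": "data/test-*"}, split="test")
print(masks_test.features)  # column names and dtypes

record = masks_test[0]
print(record["mask"].size)                 # `mask` decodes to a PIL image
print(record["target_img_dataset"])        # which dataset the target image comes from
print(record["img_id"], record["ann_id"])  # target-image ID and removed-object annotation ID
```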

## Loading the PIPE Masks Dataset

Here is an example of how to load and use this dataset with the `datasets` library:

```python
from datasets import load_dataset

data_files = {"train": "data/train-*", "test": "data/test-*"}
dataset_masks = load_dataset('paint-by-inpaint/PIPE_Masks', data_files=data_files)

# Display an example
example_train_mask = dataset_masks['train'][0]
print(example_train_mask)

example_test_mask = dataset_masks['test'][0]
print(example_test_mask)
```
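
The decoded `mask` is a regular PIL image, so it can, for example, be converted to a NumPy array or written back to disk. The snippet below is a minimal usage sketch reusing the `dataset_masks` object from the example above; the assumption that non-zero pixels mark the removed-object region is ours, not stated by the dataset card.

```python
import numpy as np

# Work with the mask of one training example as a PIL image.
example = dataset_masks["train"][0]
mask_img = example["mask"]

# Convert to a NumPy array; non-zero pixels are assumed to mark the removed-object region.
mask_arr = np.array(mask_img)
print(mask_arr.shape, mask_arr.dtype, mask_arr.max())

# Save the mask to disk, named after the target image and annotation identifiers.
out_path = f"{example['img_id']}_{example['ann_id']}_mask.png"
mask_img.save(out_path)
print(f"Saved mask to {out_path}")
```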