Bai-YT committed
Commit 2d4e8f6 · 1 parent: ceb108e
Update README with dataset viewer configs
Browse files:
- .gitattributes +0 -55
- README.md +11 -6
- {pipeline → collection_pipeline}/clean_sites.py +0 -0
- {pipeline → collection_pipeline}/clean_sites_batch.py +0 -0
- {pipeline → collection_pipeline}/download_pages.py +0 -0
- {pipeline → collection_pipeline}/find_sites.py +0 -0
- collection_pipeline/pipeline.png +0 -0
- {pipeline → collection_pipeline}/update_model_name.py +0 -0
- {pipeline → collection_pipeline}/utils/cleaning_utils.py +0 -0
- {pipeline → collection_pipeline}/utils/file_utils.py +0 -0
- {pipeline → collection_pipeline}/utils/keywords.py +0 -0
- {pipeline → collection_pipeline}/utils/query_utils.py +0 -0
- pipeline.png +0 -0
- pipeline/pipeline.png +0 -3
.gitattributes
DELETED
@@ -1,55 +0,0 @@
-*.7z filter=lfs diff=lfs merge=lfs -text
-*.arrow filter=lfs diff=lfs merge=lfs -text
-*.bin filter=lfs diff=lfs merge=lfs -text
-*.bz2 filter=lfs diff=lfs merge=lfs -text
-*.ckpt filter=lfs diff=lfs merge=lfs -text
-*.ftz filter=lfs diff=lfs merge=lfs -text
-*.gz filter=lfs diff=lfs merge=lfs -text
-*.h5 filter=lfs diff=lfs merge=lfs -text
-*.joblib filter=lfs diff=lfs merge=lfs -text
-*.lfs.* filter=lfs diff=lfs merge=lfs -text
-*.lz4 filter=lfs diff=lfs merge=lfs -text
-*.mlmodel filter=lfs diff=lfs merge=lfs -text
-*.model filter=lfs diff=lfs merge=lfs -text
-*.msgpack filter=lfs diff=lfs merge=lfs -text
-*.npy filter=lfs diff=lfs merge=lfs -text
-*.npz filter=lfs diff=lfs merge=lfs -text
-*.onnx filter=lfs diff=lfs merge=lfs -text
-*.ot filter=lfs diff=lfs merge=lfs -text
-*.parquet filter=lfs diff=lfs merge=lfs -text
-*.pb filter=lfs diff=lfs merge=lfs -text
-*.pickle filter=lfs diff=lfs merge=lfs -text
-*.pkl filter=lfs diff=lfs merge=lfs -text
-*.pt filter=lfs diff=lfs merge=lfs -text
-*.pth filter=lfs diff=lfs merge=lfs -text
-*.rar filter=lfs diff=lfs merge=lfs -text
-*.safetensors filter=lfs diff=lfs merge=lfs -text
-saved_model/**/* filter=lfs diff=lfs merge=lfs -text
-*.tar.* filter=lfs diff=lfs merge=lfs -text
-*.tar filter=lfs diff=lfs merge=lfs -text
-*.tflite filter=lfs diff=lfs merge=lfs -text
-*.tgz filter=lfs diff=lfs merge=lfs -text
-*.wasm filter=lfs diff=lfs merge=lfs -text
-*.xz filter=lfs diff=lfs merge=lfs -text
-*.zip filter=lfs diff=lfs merge=lfs -text
-*.zst filter=lfs diff=lfs merge=lfs -text
-*tfevents* filter=lfs diff=lfs merge=lfs -text
-# Audio files - uncompressed
-*.pcm filter=lfs diff=lfs merge=lfs -text
-*.sam filter=lfs diff=lfs merge=lfs -text
-*.raw filter=lfs diff=lfs merge=lfs -text
-# Audio files - compressed
-*.aac filter=lfs diff=lfs merge=lfs -text
-*.flac filter=lfs diff=lfs merge=lfs -text
-*.mp3 filter=lfs diff=lfs merge=lfs -text
-*.ogg filter=lfs diff=lfs merge=lfs -text
-*.wav filter=lfs diff=lfs merge=lfs -text
-# Image files - uncompressed
-*.bmp filter=lfs diff=lfs merge=lfs -text
-*.gif filter=lfs diff=lfs merge=lfs -text
-*.png filter=lfs diff=lfs merge=lfs -text
-*.tiff filter=lfs diff=lfs merge=lfs -text
-# Image files - compressed
-*.jpg filter=lfs diff=lfs merge=lfs -text
-*.jpeg filter=lfs diff=lfs merge=lfs -text
-*.webp filter=lfs diff=lfs merge=lfs -text
README.md
CHANGED
@@ -12,17 +12,22 @@ tags:
 - robustness
 - llm
 - injection
+configs:
+- config_name: default
+  data_files:
+  - split: all_urls
+    path: "webpage_links.csv"
 ---
 
-
+# The *RAGDOLL* E-Commerce Webpage Dataset
 
 This repository contains the ***RAGDOLL*** (Retrieval-Augmented Generation Deceived Ordering via AdversariaL materiaLs) dataset as well as its LLM-automated collection pipeline.
 
 The ***RAGDOLL*** dataset is from the paper [*Ranking Manipulation for Conversational Search Engines*](https://arxiv.org/pdf/2406.03589) by Samuel Pfrommer, Yatong Bai, Tanmay Gautam, and Somayeh Sojoudi. For experiment code associated with this paper, please refer to [this repository](https://github.com/spfrommer/cse-ranking-manipulation).
 
-The dataset consists of 10 product categories (see [`
+The dataset consists of 10 product categories (see [`categories.md`](https://huggingface.co/datasets/Bai-YT/RAGDOLL/blob/main/README.md)), with at least 8 brands for each category and 1-3 products per brand, summing to 1147 products in total. The evaluations in our paper are performed on a balanced subset with precisely 8 brands per category and 1 product per brand.
 
-The URLs of the full 1147 products are shared at [`
+The URLs of the full 1147 products are shared at [`webpage_links.csv`](https://huggingface.co/datasets/Bai-YT/RAGDOLL/blob/main/webpage_links.csv). We additionally share the downloaded webpages associated with the data subset used in our paper at [`webpage_contents`](https://huggingface.co/datasets/Bai-YT/RAGDOLL/tree/main/webpage_contents) for reproducibility.
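The `configs` block added above tells the Hugging Face dataset viewer to expose `webpage_links.csv` as a single `all_urls` split, which can also be loaded programmatically via `datasets.load_dataset("Bai-YT/RAGDOLL", split="all_urls")`. As a self-contained sketch of working with such a links CSV (the column names below are hypothetical illustrations, not taken from the actual file):

```python
import csv
import io

# Hypothetical sample mirroring the shape of a URL-list CSV; the real column
# names in webpage_links.csv may differ -- inspect the file before relying on them.
sample = (
    "category,brand,url\n"
    "coffee machines,AcmeBrew,https://example.com/acmebrew/model-x\n"
    "coffee machines,BrewCo,https://example.com/brewco/classic\n"
)

# DictReader maps each row to a dict keyed by the header line.
rows = list(csv.DictReader(io.StringIO(sample)))
urls = [row["url"] for row in rows]
print(len(urls))  # 2
```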
 
 
 ### Description

@@ -48,12 +53,12 @@ When downloading webpages, it is highly recommended to download *dynamic* pages
 - Use the `selenium` package to invoke web browsers (faster, more up-to-date).
 - Download from CommonCrawl (slower, more reproducible).
 
-The downloading method is controlled with [`cc_fetch`](https://
+The downloading method is controlled with [`cc_fetch`](https://huggingface.co/datasets/Bai-YT/RAGDOLL/blob/a19ce2a29f7317aefdbfae4e469f28d4cfa25d21/collection_pipeline/utils/query_utils.py#L39).
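The two download paths above can be pictured as a simple flag-controlled dispatch. This is an illustrative sketch only; the function names are hypothetical, not the pipeline's actual API (see `collection_pipeline/utils/query_utils.py` for the real `cc_fetch` logic):

```python
# Hypothetical dispatch between a live-browser fetch (selenium-style) and a
# CommonCrawl archive fetch (cdx_toolkit-style), selected by a cc_fetch flag.
def download_page(url: str, cc_fetch: bool, live_fetcher, cc_fetcher):
    """Fetch `url` from CommonCrawl when cc_fetch is True, else from the live web."""
    fetcher = cc_fetcher if cc_fetch else live_fetcher
    return fetcher(url)

# Stub fetchers standing in for the real selenium / cdx_toolkit calls:
live = lambda u: f"live:{u}"
archive = lambda u: f"cc:{u}"

result = download_page("https://example.com", cc_fetch=True,
                       live_fetcher=live, cc_fetcher=archive)
print(result)  # cc:https://example.com
```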
 
 
 ### Collecting Your Own Dataset
 
-You can use this data collection pipeline to collect additional websites or additional product categories. To do so, modify [`
+You can use this data collection pipeline to collect additional websites or additional product categories. To do so, modify [`categories.md`](https://huggingface.co/datasets/Bai-YT/RAGDOLL/blob/main/README.md) accordingly and run the code following the instructions below.
 
 Required packages:
 ```

@@ -63,7 +68,7 @@ click pandas torch requests bs4 lxml unidecode selenium openai cdx_toolkit
 To query GPT-4-Turbo to collect a set of brands and products, run
 ```
 python find_sites.py --model "gpt-4-turbo"
-# feel free to replace with gpt-4o
+# feel free to replace with gpt-4o or other OpenAI models without code modification
 ```
 
 To clean the dataset (with Google Search API and GPT-3.5-Turbo), run
{pipeline → collection_pipeline}/clean_sites.py
RENAMED
File without changes

{pipeline → collection_pipeline}/clean_sites_batch.py
RENAMED
File without changes

{pipeline → collection_pipeline}/download_pages.py
RENAMED
File without changes

{pipeline → collection_pipeline}/find_sites.py
RENAMED
File without changes

collection_pipeline/pipeline.png
ADDED

{pipeline → collection_pipeline}/update_model_name.py
RENAMED
File without changes

{pipeline → collection_pipeline}/utils/cleaning_utils.py
RENAMED
File without changes

{pipeline → collection_pipeline}/utils/file_utils.py
RENAMED
File without changes

{pipeline → collection_pipeline}/utils/keywords.py
RENAMED
File without changes

{pipeline → collection_pipeline}/utils/query_utils.py
RENAMED
File without changes

pipeline.png
ADDED

pipeline/pipeline.png
DELETED
Git LFS Details