ChenyuHeidiZhang committed
Commit ed843d2 · 1 Parent(s): d4da583

test with new data

README.md CHANGED
@@ -12,18 +12,19 @@ tags:
  - NLG
  - evaluation
  size_categories:
- - 100K<n<1M
+ - 1M<n<10M
  language:
  - en
  ---
- # 🚢 Stanford Human Preferences Dataset (SHP)
+ # 🚢 Stanford Human Preferences Dataset v2 (SHP-2)

  ## Summary

- SHP is a dataset of **385K collective human preferences** over responses to questions/instructions in 18 different subject areas, from cooking to legal advice.
+ SHP-2 is a dataset of **4.8M collective human preferences** over responses to questions/instructions in 129 different subject areas, from cooking to legal advice. It is an extended version of the original 385K [SHP dataset](https://huggingface.co/datasets/stanfordnlp/SHP).
+
  The preferences are meant to reflect the helpfulness of one response over another, and are intended to be used for training RLHF reward models and NLG evaluation models (e.g., [SteamSHP](https://huggingface.co/stanfordnlp/SteamSHP-flan-t5-xl)).

- Each example is a Reddit post with a question/instruction and a pair of top-level comments for that post, where one comment is more preferred by Reddit users (collectively).
+ Each example is a Reddit or StackExchange post with a question/instruction and a pair of top-level comments for that post, where one comment is more preferred by Reddit / StackExchange users (collectively).
  SHP exploits the fact that if comment A was written *after* comment B but has a higher score nonetheless, then A is ostensibly more preferred to B.
  If A had been written before B, then we could not conclude this, since its higher score could have been the result of more visibility.
  We chose data where the preference label is intended to reflect which response is more *helpful* rather than which is less *harmful*, the latter being the focus of much past work.
@@ -33,7 +34,7 @@ Most notably, all the data in SHP is naturally occurring and human-written, wher

  | Dataset | Size | Input | Label | Domains | Data Format | Length |
  | -------------------- | ---- | -------------------------- | ---------------------------- | ------------------------- | ------------------------------------- | --------------- |
- | SHP | 385K | Naturally occurring human-written responses | Collective Human Preference | 18 (labelled) | Question/Instruction + Response (Single-turn) | up to 10.1K T5 tokens |
+ | SHP-2 | 4.8M | Naturally occurring human-written responses | Collective Human Preference | 129 (labelled) | Question/Instruction + Response (Single-turn) | up to 10.1K T5 tokens |
  | HH-RLHF | 91K | Dialogue with LLM | Individual Human Preference | not labelled | Live Chat (Multi-turn) | up to 1.5K T5 tokens |

  How is SHP different from other datasets that have scraped Reddit, like [ELI5](https://huggingface.co/datasets/eli5#source-data)?
@@ -42,7 +43,7 @@ It also contains data from more domains:

  | Dataset | Size | Comments + Scores | Preferences | Number of Domains |
  | -------------------- | ---- | ------------------ | -------------| ------------------ |
- | SHP | 385K | Yes | Yes | 18 |
+ | SHP-2 | 4.8M | Yes | Yes | 129 (70 from Reddit, 59 from StackExchange) |
  | ELI5 | 270K | Yes | No | 3 |


@@ -55,13 +56,13 @@ Here's how to get the data using Huggingface's `datasets` library:
  from datasets import load_dataset

  # Load all the data
- dataset = load_dataset("stanfordnlp/shp")
+ dataset = load_dataset("stanfordnlp/shp-2")

  # Load one of the subreddits
- dataset = load_dataset("stanfordnlp/shp", data_dir="askculinary")
+ dataset = load_dataset("stanfordnlp/shp-2", data_dir="reddit/askculinary")
  ```

- Here's an example from `askculinary/train.json`:
+ Here's an example from `reddit/askculinary/train.json`:
  ```
  {
    `post_id`:"qt3nxl",
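The preference rule described in the README changes above (a comment written *after* another but scored higher is taken to be the preferred response) can be sketched roughly as follows. This is an illustrative reimplementation, not the authors' actual extraction pipeline, and the field names `created_utc`, `score`, and `body` are assumptions made for the example rather than the dataset's schema.

```python
# Sketch of the SHP labeling rule: if comment A was posted *after* comment B
# but has a strictly higher score, A is taken to be preferred; the reverse
# ordering is uninformative because the earlier comment had more visibility.
# Field names (created_utc, score, body) are assumptions for illustration.
def make_preference_pair(comment_a: dict, comment_b: dict):
    """Return (preferred, dispreferred) or None if no label can be inferred."""
    # Order the two comments by posting time: `earlier` was visible longer.
    earlier, later = sorted((comment_a, comment_b), key=lambda c: c["created_utc"])
    # Only a later comment overcoming the earlier one's visibility advantage
    # yields a usable preference label.
    if later["score"] > earlier["score"]:
        return later, earlier
    return None

# Toy example:
a = {"body": "Rest the steak before slicing.", "created_utc": 200, "score": 57}
b = {"body": "Slice it immediately.", "created_utc": 100, "score": 10}
print(make_preference_pair(a, b))  # (a, b): a is preferred despite being later
```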
stackexchange/stack_academia/test.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:77b7d82c625463433f11cb14ddcdc817b85e328d20fecbf4d7933e79973db49b
+ size 1067151
stackexchange/stack_academia/train.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a3694433d50610a1c653537f2c104dcbbcaaf51958afd8a525c206e6bc3ea1e6
+ size 25012875
stackexchange/stack_academia/validation.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:73055bdbe492a5c55a5d02c20369301b19822f98658da5c0a7fbd944e4c6c935
+ size 1232052
stackexchange/stack_android/test.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d3c9dbdaa75d0d13026dc28c18913d179a751dc39ecd546331a09386a4882b42
+ size 195287
stackexchange/stack_android/train.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1d3e3b9bcfa9181f4603f4c049541b29fbbfb9e0bf0b6b7a6577f1031202397a
+ size 2180355
stackexchange/stack_android/validation.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dc099f0d2867b6093333544705e04bdf14a4cc7d546630b8067d5bba57adf0d6
+ size 109010
stackexchange/stack_apple/test.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:823f5da9ba57a5d9221f2daaf862ce26aefd8f1c7bf752a7a2bafb255c559c40
+ size 824978
stackexchange/stack_apple/train.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:36363f786b1b9f52837b6fb6a866b3ef299f56f4dea5b4ffaa2a9a1a8d163378
+ size 14326265
stackexchange/stack_apple/validation.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7b6ba9fdfaca6d5a3cc0d461e4ce0ef6bcd9ce976e6aed4d3247105a77833ba6
+ size 718357
stackexchange/stack_arduino/test.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0519fa305efb42ee21bf731bb6a5d9f22eadabbf32bc9e417bbca82fc2b5c614
+ size 59120
stackexchange/stack_arduino/train.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:069f41674d9161ae4aee4bba024f505142824fd3c1188faa50e4837d0d3c738f
+ size 1229291
stackexchange/stack_arduino/validation.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1f99cc582bebd61c8f0d82720b31efaa6ba9493afb2ff5d50cb502b41defb488
+ size 73433
stackexchange/stack_askubuntu/test.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:14521550427f3a20c57fd8cc214cb8d697e62bb83f6fb1b5d8a22f4c5aede53a
+ size 1703214
stackexchange/stack_askubuntu/train.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f58c543417082042b46cd9bdb93a8c8bbf5645a46e12a6c76dea6963018f91b4
+ size 30860762
stackexchange/stack_askubuntu/validation.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a1744d6760cc9eae8d9450564fb8993700d033f7e51937836f99b2e60e927d87
+ size 1718593
stackexchange/stack_aviation/test.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9e052a7aa41dd70aac7af66ab91a4e1e25f60586f8dcbd131d6be0aaae40e23b
+ size 285006
stackexchange/stack_aviation/train.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c70e25d740547aff739fc9feb8df5269e87d2cecfcd48086601ec2178da7689c
+ size 7031848
stackexchange/stack_aviation/validation.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:36ead4c5492700ddeb5bbec8cf75088b1f173c492a3ac26160a5b9daccdad62a
+ size 360754
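Given the `stackexchange/*` splits added in this commit, one of them can presumably be loaded the same way the README loads a Reddit subset. The `data_dir` value below is inferred from the file paths above by analogy with the documented `reddit/askculinary` example, so treat it as an assumption rather than documented usage.

```python
from datasets import load_dataset

# Hypothetical usage: load one StackExchange subset added in this commit.
# The data_dir mirrors the newly added file paths (an assumption).
dataset = load_dataset("stanfordnlp/shp-2", data_dir="stackexchange/stack_academia")
print(dataset)  # expected splits: train / validation / test
```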