Update README.md
## Data Structure

There are 2 directories, one for Reddit and one for StackExchange. There are 70 subdirectories under `reddit/`, one for each subreddit, and 59 subdirectories under `stackexchange/`, one for each StackExchange site.
Each subdirectory contains a JSONL file for the training, validation, and test data.
Here's how to get the data using Huggingface's `datasets` library:

```python
from datasets import load_dataset

# Load all of the data
dataset = load_dataset("stanfordnlp/shp-2")

# Load one of the subreddits
dataset = load_dataset("stanfordnlp/shp-2", data_dir="reddit/askculinary")

# Load one of the StackExchange sites
dataset = load_dataset("stanfordnlp/shp-2", data_dir="stackexchange/stack_academia")
```
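
To sanity-check a download, you can peek at one example. This is a minimal sketch; it assumes the standard `train` split name and that examples carry fields such as `history` and `score_ratio` (both referenced later in this card):

```python
from datasets import load_dataset

# Load a single subreddit and inspect one training example
dataset = load_dataset("stanfordnlp/shp-2", data_dir="reddit/askculinary")
example = dataset["train"][0]
print(sorted(example.keys()))  # field names, e.g. 'history', 'score_ratio'
```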
### Data Selection
For Reddit, the score of a post/comment is 1 plus the number of upvotes (approvals) it gets from users, minus the number of downvotes (disapprovals) it gets.
For StackExchange, the score of a post/comment is 0 plus the number of upvotes (approvals) it gets from users, minus the number of downvotes (disapprovals) it gets.
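
In code, the two scoring conventions are simply the following (a sketch; `upvotes` and `downvotes` are hypothetical tallies, not fields in the released data):

```python
def reddit_score(upvotes: int, downvotes: int) -> int:
    # Reddit scores start at 1 (a new post/comment begins with one upvote)
    return 1 + upvotes - downvotes

def stackexchange_score(upvotes: int, downvotes: int) -> int:
    # StackExchange scores start at 0
    return 0 + upvotes - downvotes
```
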
The value of a score is relative; in domains (posts) with more traffic, there will be more higher-scoring posts (comments).
Within a post, comments posted earlier will tend to have a higher score simply due to having more exposure.

Given a post P and two comments (A,B), we only included the preference A > B in the dataset if:
1. A was written *no later than* B and A has a higher score than B.
2. The post is a self-post (i.e., a body of text and not a link to another page) made before 2023, was not edited, and is not NSFW (over 18). For StackExchange, edited posts were permitted as long as they were edited prior to the writing of the comments.
3. Neither comment was made by a deleted user, a moderator, or the post creator. The post was not made by a deleted user or moderator.
4. For Reddit, the post has a score >= 10 and each comment has a score >= 2 (upvoted at least once). For StackExchange, the post has a score >= 5 and each comment has a non-zero score.

The conditions are laxer for StackExchange because it is more strictly moderated than Reddit, allowing us to maintain the same data quality with lower thresholds.
In particular, we allow negative-score comments from StackExchange because their negative scores are likely due to inaccuracy or misinformation rather than toxicity, and this provides a useful signal.
A post with `n` comments could have up to (`n` choose `2`) preferences in the data.
Since the number of comments per post is Pareto-distributed, to prevent a relatively small number of posts from dominating the Reddit data, we limited the scraping to 50 comments per post.
This means that each post could have up to (`50` choose `2`) preferences in the dataset, though this is a much smaller number in practice, since all the criteria above need to be met.
No such limit was imposed for StackExchange, since there are fewer comments per post.
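
Putting the timing rule and the (`n` choose `2`) counting together, here is a minimal sketch of the pairing logic. The comment records and their `score`/`created_utc` fields are hypothetical, and the other criteria above are assumed to be checked elsewhere:

```python
from itertools import combinations

def preference_pairs(comments):
    """Yield (preferred, dispreferred) pairs from one post's comments.

    Implements criterion 1 only: the preferred comment must be written
    no later than the other AND have a strictly higher score.
    """
    for a, b in combinations(comments, 2):  # up to (n choose 2) pairs
        earlier, later = sorted((a, b), key=lambda c: c["created_utc"])
        if earlier["score"] > later["score"]:
            yield earlier, later
```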
### Reddit Preprocessing
We tried to keep preprocessing to a minimum. Subreddit-specific abbreviations were expanded (e.g., "CMV" to "Change my view that").
In hyperlinks, only the referring text was kept and the URL was removed (if the URL was written out, then it was kept).
### Finetuning
If you want to finetune a model to predict human preferences (e.g., for NLG evaluation or an RLHF reward model), here are some helpful tips:

1. **Preprocess the data.** The whole input needs to fit within the model's token limit (usually 512 tokens).
To avoid exceeding it, truncate the post text (in the `history` field) as much as possible, such that the whole input is under 512 tokens (do not truncate the comment(s), however); see the truncation sketch after this list.
If this is still over 512 tokens, simply skip the example.
2. **Use a sufficiently large model.**
Finetuning a single FLAN-T5-xl model on [the original 385K SHP training data](https://huggingface.co/datasets/stanfordnlp/SHP) should give you a test accuracy between 72% and 73% (across all domains, on examples where the entire input fits within the token limit), ranging from 65% to 80% on individual subreddits.
3. **Do in-domain prediction.** Out-of-domain performance will be poor if the domains are unrelated (e.g., if you finetune on `askculinary` preferences and test on `askcarguys` preferences).
4. **Train for fewer epochs.** The InstructGPT paper suggests training a reward model for only 1 epoch.
Since the same comment appears in multiple preferences, it is easy to overfit to the data.
5. **Training on less data may help.**
Preferences with a large `score_ratio` (e.g., comment A having 2x the score of comment B) will provide a stronger signal for finetuning the model, so you may only want to consider preferences above a certain `score_ratio`.
The number of preferences per post is Pareto-distributed, so to prevent the model from over-fitting to certain posts, you may want to limit the number of preferences from a particular post (see the filtering sketch after this list).
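
Here is the truncation routine referenced in tip 1. It is a minimal sketch, assuming a FLAN-T5 tokenizer from `transformers`; it ignores any prompt-template tokens, so leave some margin in practice:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xl")

def truncate_history(history: str, comment_a: str, comment_b: str,
                     max_tokens: int = 512):
    """Truncate only the post text so the whole input fits the limit.

    Returns the truncated history, or None if the comments alone
    already exceed the budget (in which case, skip the example).
    """
    # The comments are never truncated, so compute what they leave over.
    fixed = (len(tokenizer(comment_a).input_ids)
             + len(tokenizer(comment_b).input_ids))
    budget = max_tokens - fixed
    if budget <= 0:
        return None
    history_ids = tokenizer(history).input_ids[:budget]
    return tokenizer.decode(history_ids, skip_special_tokens=True)
```

And for tip 5, one way to keep only strongly-held preferences while capping how many pairs any one post contributes; the `post_id` field name and both thresholds are assumptions to adapt to your setup:

```python
def filter_preferences(prefs, min_ratio: float = 2.0, max_per_post: int = 5):
    """Keep preferences with score_ratio >= min_ratio, taking at most
    max_per_post pairs from any single post."""
    kept, per_post = [], {}
    for p in prefs:
        if p["score_ratio"] < min_ratio:
            continue
        if per_post.get(p["post_id"], 0) >= max_per_post:
            continue
        per_post[p["post_id"]] = per_post.get(p["post_id"], 0) + 1
        kept.append(p)
    return kept
```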
## Biases and Limitations
### Biases

Although we filtered out posts with NSFW (over 18) content and chose domains that were well-moderated and had policies against harassment and bigotry, some of the data may still contain discriminatory or harmful language.
The data does not reflect the views of the dataset creators.
Reddit and StackExchange users are also not representative of the broader population.
Although subreddit-specific demographic information is not available, Reddit users overall are disproportionately male and from developed, Western, and English-speaking countries ([Pew Research](https://www.pewresearch.org/internet/2013/07/03/6-of-online-adults-are-reddit-users/)).
This is likely also true of StackExchange users.
Please keep this in mind before using any models trained on this data.
### Limitations
## License

Last updated: 07/16/2023
### Reddit

The Reddit data was collected by scraping publicly available data in accordance with a historical version of the [Reddit API Terms of Use](https://docs.google.com/a/reddit.com/forms/d/e/1FAIpQLSezNdDNK1-P8mspSbmtC2r86Ee9ZRbC66u929cG2GX0T9UMyw/viewform), without any direct communication or written agreements with Reddit.
According to the Terms of Use, "User Content" is owned by the users themselves -- not by Reddit -- and Reddit grants a "non-exclusive, non-transferable, non-sublicensable, and revocable license to copy and display the User Content".
At the time of writing, Reddit's terms state that "no other rights or licenses are granted or implied, including any right to use User Content for other purposes, such as for training a machine learning or artificial intelligence model, without the express permission of rightsholders in the applicable User Content."
However, the legality of training on publicly available data depends on your jurisdiction (it is legal in Japan, for example).
Datasets made by scraping Reddit are widely used in the research community: for example, Facebook AI Research used data scraped from Reddit to make the [ELI5](https://huggingface.co/datasets/eli5#source-data) dataset in 2019, which was made available without a license.
Anthropic AI has also [attested to scraping Reddit](https://arxiv.org/pdf/2112.00861.pdf) for preferences using a different methodology, though this data was not made public.
We take no responsibility for and we do not expressly or implicitly endorse any downstream use of this dataset.
We reserve the right to modify the SHP dataset and this license at any point in the future.
### StackExchange

The StackExchange data is made available under a [CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/).
## Contact
Please contact [email protected] if you have any questions about the data.
This dataset was created by Kawin Ethayarajh, Heidi (Chenyu) Zhang, and Shabnam Behzad with advice from Dan Jurafsky and Yizhong Wang.
Kawin and Heidi prepared the Reddit datasets and trained the SteamSHP models.
Kawin and Shabnam prepared the StackExchange data.
Dan and Yizhong provided advice on dataset construction.
## Citation

We will have a paper out soon, but until then, please cite:

```
@InProceedings{pmlr-v162-ethayarajh22a,
  title =     {Understanding Dataset Difficulty with $\mathcal{V}$-Usable Information},
  author =    {Ethayarajh, Kawin and Choi, Yejin and Swayamdipta, Swabha},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages =     {5988--6008},
  year =      {2022},
  volume =    {162},
  series =    {Proceedings of Machine Learning Research},
  publisher = {PMLR},
}
```