kawine committed
Commit 57f9a75 · 1 Parent(s): fd98c30

Update README.md

Files changed (1): README.md +10 -7
README.md CHANGED
@@ -23,7 +23,7 @@ SteamSHP is a preference model trained to predict human preferences, given some
  It can be used for NLG evaluation or to train a smaller reward model for RLHF.

  It is a FLAN-T5-xl model (3B parameters) finetuned on:
- 1. The [Stanford Human Preferences Dataset (SHP)](https://huggingface.co/datasets/stanfordnlp/SHP), which contains aggregate human preferences sourced from 18 different communities on Reddit (e.g., `askculinary`, `legaladvice`, etc.)
+ 1. The [Stanford Human Preferences Dataset (SHP)](https://huggingface.co/datasets/stanfordnlp/SHP), which contains aggregate human preferences sourced from 18 different communities on Reddit (e.g., `askculinary`, `legaladvice`, etc.).
  2. The helpfulness data in [Anthropic's HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf) dataset.

@@ -60,18 +60,21 @@ Which response is better? RESPONSE

  The output generated by SteamSHP will either be `A` or `B`.

- If the input exceeds the 512 token limit, you can use [pybsd](https://github.com/nipunsadvilkar/pySBD) to break the input up into sentences and only include that fits into 512 tokens.
+ If the input exceeds the 512 token limit, you can use [pySBD](https://github.com/nipunsadvilkar/pySBD) to break the input up into sentences and only include what fits into 512 tokens.
+ When trying to cram an example into 512 tokens, we recommend truncating the context as much as possible and leaving the responses as untouched as possible.


  ## Training and Evaluation

  SteamSHP was only finetuned on 125K of the 392K training examples that were available, since we found that:
  1. When the total input length exceeded the limit (512 tokens), the loss would not converge.
- When possible, we crammed an example into 500 tokens by truncating the context as much as possible, though some examples would still not fit.
- 2. Training on fewer preferences with a stronger signal led to better performance than training on all the preferences.
- From the SHP dataset, we only used preferences where the more preferred comment was twice as preferred as the other (i.e., `score_ratio` >= 2) and used no more than 5 preferences from each context (i.e., `post_id`) to prevent ovefitting.
+ When possible, we crammed an example to fit under 500 tokens by truncating the context as much as possible, though some examples would still not fit despite this.
+ We used 500 as the limit instead of 512 to allow for slight modifications to the structure of the input without any examples exceeding the actual 512 limit.
+ 2. Training on fewer preferences with a stronger signal led to better performance than training on all the preferences.
+ From the SHP dataset, we only used preferences where the more preferred comment was twice as preferred as the other (i.e., `score_ratio` >= 2) and used no more than 5 preferences from each context (i.e., 5 examples per unique `post_id`) to prevent overfitting.
+ We did no such subsampling for the HH-RLHF training data.

- We evaluated the model on the SHP and HH-RLHF test data using accuracies, but only on the data that could be truncated to fit within 500 tokens (a total of 18621 examples).
+ We evaluated the model on the SHP and HH-RLHF test data using accuracy, but only on the data that could be truncated to fit within 500 tokens (a total of 18621 out of 20753 available test examples).
  SteamSHP gets an average 72.8% accuracy across all domains:

  | Domain | Accuracy |
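
The new lines above recommend segmenting over-long inputs with [pySBD](https://github.com/nipunsadvilkar/pySBD) and truncating only the context. Below is a minimal sketch of that recipe (an editorial illustration, not part of the commit). The repo ID `stanfordnlp/SteamSHP-flan-t5-xl` and the exact `POST:` / `RESPONSE A:` / `RESPONSE B:` template are assumptions based on this model card; only the closing `Which response is better? RESPONSE` is quoted from it.

```python
# Sketch: keep the responses intact, drop trailing context sentences until the prompt
# fits under a 500-token budget, then ask SteamSHP which response is better.
import pysbd
from transformers import T5ForConditionalGeneration, T5Tokenizer

MODEL_ID = "stanfordnlp/SteamSHP-flan-t5-xl"  # assumed repo ID for this model card
tokenizer = T5Tokenizer.from_pretrained(MODEL_ID)
model = T5ForConditionalGeneration.from_pretrained(MODEL_ID)
segmenter = pysbd.Segmenter(language="en", clean=False)

MAX_TOKENS = 500  # headroom under the hard 512-token limit, as the card suggests


def build_prompt(context: str, response_a: str, response_b: str) -> str:
    # Assumed template; only the final line is quoted from the README.
    return (
        f"POST: {context}\n\n"
        f"RESPONSE A: {response_a}\n\n"
        f"RESPONSE B: {response_b}\n\n"
        "Which response is better? RESPONSE"
    )


def truncated_prompt(context: str, response_a: str, response_b: str) -> str:
    """Drop trailing context sentences until the full prompt fits in MAX_TOKENS."""
    sentences = segmenter.segment(context)
    while sentences:
        prompt = build_prompt(" ".join(sentences), response_a, response_b)
        if len(tokenizer(prompt).input_ids) <= MAX_TOKENS:
            return prompt
        sentences = sentences[:-1]  # truncate the context, never the responses
    return build_prompt("", response_a, response_b)


prompt = truncated_prompt(
    context="How do I keep my cast iron pan from rusting?",
    response_a="Dry it right away and rub in a thin layer of oil before storing it.",
    response_b="Just run it through the dishwasher.",
)
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(input_ids, max_new_tokens=1)
print(tokenizer.decode(output[0], skip_special_tokens=True))  # "A" or "B"
```

The accuracies reported below are then simply the fraction of test examples for which this single-token `A`/`B` prediction matches the human-preferred response.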
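
The SHP subsampling described in the new lines above can be approximated as follows; this is a sketch, not the authors' training code. The column names `score_ratio` and `post_id` are taken from the README, while the use of pandas and the ordering of the two filtering steps are assumptions.

```python
# Sketch of the SHP subsampling described above: keep preferences where the preferred
# comment is at least twice as preferred (score_ratio >= 2), then cap each context
# (post_id) at 5 preferences. HH-RLHF is left unsubsampled, per the README.
from datasets import load_dataset

shp = load_dataset("stanfordnlp/SHP", split="train").to_pandas()

# Keep only preferences with a strong signal.
shp = shp[shp["score_ratio"] >= 2]

# No more than 5 preferences per unique post_id, to prevent overfitting.
shp = shp.groupby("post_id").head(5)

print(f"{len(shp)} SHP training preferences retained")
```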
@@ -105,7 +108,7 @@ Biases in the datasets used to train SteamSHP may be propagated downstream to th
  Although SHP filtered out posts with NSFW (over 18) content, chose subreddits that were well-moderated and had policies against harassment and bigotry, some of the data may contain discriminatory or harmful language.
  Reddit users on the subreddits covered by SHP are also not representative of the broader population. They are disproportionately from developed, Western, and English-speaking countries.

- It is also worth noting that the more preferred response in SHP or HH-RLHF is not necessarily the more correct one -- they just reflect a preference.
+ It is also worth noting that the more preferred response in SHP or HH-RLHF is not necessarily the more correct one -- the data just reflects the aggregate preference of Reddit users (in SHP's case) and individuals' preferences (in HH-RLHF's case).
  [Past work](https://www.anthropic.com/model-written-evals.pdf) by Anthropic has found that models optimized for human preference can be obsequious, at the expense of the truth.
