BraylonDash committed
Commit aabee42 · verified · 1 Parent(s): cf268a8

Model save

README.md ADDED
@@ -0,0 +1,60 @@
+ ---
+ license: mit
+ library_name: peft
+ tags:
+ - trl
+ - dpo
+ - generated_from_trainer
+ base_model: microsoft/phi-2
+ model-index:
+ - name: phi-2-gpo-renew2-b0.001-vllm-merge-20k-complete-refSFT-i1
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # phi-2-gpo-renew2-b0.001-vllm-merge-20k-complete-refSFT-i1
+
+ This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the None dataset.
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-06
+ - train_batch_size: 4
+ - eval_batch_size: 4
+ - seed: 42
+ - distributed_type: multi-GPU
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 16
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 1
+
+ ### Training results
+
+
+
+ ### Framework versions
+
+ - PEFT 0.7.1
+ - Transformers 4.36.2
+ - Pytorch 2.1.2
+ - Datasets 2.14.6
+ - Tokenizers 0.15.2
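
The card above describes a single-epoch DPO run over the phi-2 base with a PEFT adapter. Below is a minimal sketch of how such a run could be configured; it is not the author's training script. The preference-data file, the LoRA settings, and beta=0.001 (hinted only by the "b0.001" tag in the model name) are assumptions, and the call follows the TRL API contemporary with the framework versions listed, where DPOTrainer still takes beta directly.

```python
# Minimal DPO-with-PEFT sketch matching the hyperparameters in the card above.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_id = "microsoft/phi-2"
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token

# Hypothetical preference data with plain-string "prompt", "chosen", "rejected" columns.
train_dataset = load_dataset("json", data_files="preferences.jsonl", split="train")

peft_config = LoraConfig(  # assumed adapter settings, not read from this repo
    r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "dense"],
)

args = TrainingArguments(
    output_dir="phi-2-dpo",
    num_train_epochs=1,
    learning_rate=5e-6,
    per_device_train_batch_size=4,   # x4 gradient accumulation gives the total batch of 16
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    remove_unused_columns=False,     # DPOTrainer builds its own paired batches
    # default AdamW optimizer: betas=(0.9, 0.999), epsilon=1e-08, as in the card
)

trainer = DPOTrainer(
    model,
    ref_model=None,                  # with a peft_config, the frozen base model acts as reference
    beta=0.001,                      # assumed from the "b0.001" tag in the model name
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
    max_length=1024,
    max_prompt_length=512,
)
trainer.train()
```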
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:553a5d305cbf175f60dc474b9c19dad798d2db6132ae92146a6edbdde11954a8
+ oid sha256:04c148fe510f82672c6b7651324dca8d431a1e78a642e6d3e3970714419ad36e
  size 167807296
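
The updated adapter_model.safetensors is a PEFT adapter on top of microsoft/phi-2, not a full set of merged weights. A minimal loading sketch follows; the repository id is an assumption inferred from the model-index name above and is not stated in this commit.

```python
# Minimal sketch: load the PEFT adapter on top of the microsoft/phi-2 base model.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo_id = "BraylonDash/phi-2-gpo-renew2-b0.001-vllm-merge-20k-complete-refSFT-i1"  # assumed

model = AutoPeftModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,  # phi-2 custom code is needed on Transformers 4.36.x
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)

prompt = "Instruct: Summarize what DPO training optimizes.\nOutput:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

If merged weights are needed (for example, to serve the model with an inference engine, as the "vllm" tag in the model name suggests), the adapter can be folded into the base with `model.merge_and_unload()` and saved as a standalone checkpoint.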
all_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+     "epoch": 1.0,
+     "train_loss": 0.12703038393855096,
+     "train_runtime": 9089.2921,
+     "train_samples": 61135,
+     "train_samples_per_second": 2.2,
+     "train_steps_per_second": 0.138
+ }
runs/May10_12-46-31_gpu4-119-5/events.out.tfevents.1715309304.gpu4-119-5.2835564.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:68f285dbcd90f2092864a82529884c462c78bcadc005e7c219368c2ed4937c9d
- size 81532
+ oid sha256:cbd4e31be92ba3d1a29daefe754bb27084d8f04869d904019d4aeac598f7820d
+ size 85056
train_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+     "epoch": 1.0,
+     "train_loss": 0.12703038393855096,
+     "train_runtime": 9089.2921,
+     "train_samples": 61135,
+     "train_samples_per_second": 2.2,
+     "train_steps_per_second": 0.138
+ }
trainer_state.json ADDED
@@ -0,0 +1,1794 @@
1
+ {
2
+ "best_metric": null,
3
+ "best_model_checkpoint": null,
4
+ "epoch": 1.0,
5
+ "eval_steps": 500,
6
+ "global_step": 1250,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
11
+ {
12
+ "epoch": 0.0,
13
+ "learning_rate": 4e-08,
14
+ "logits/chosen": 1.175201177597046,
15
+ "logits/rejected": 1.1360995769500732,
16
+ "logps/chosen": -464.5503234863281,
17
+ "logps/rejected": -320.97235107421875,
18
+ "loss": 0.1112,
19
+ "rewards/accuracies": 0.0,
20
+ "rewards/chosen": 0.0,
21
+ "rewards/margins": 0.0,
22
+ "rewards/rejected": 0.0,
23
+ "step": 1
24
+ },
25
+ {
26
+ "epoch": 0.01,
27
+ "learning_rate": 4.0000000000000003e-07,
28
+ "logits/chosen": 1.2028732299804688,
29
+ "logits/rejected": 1.0901875495910645,
30
+ "logps/chosen": -496.0768127441406,
31
+ "logps/rejected": -346.5080871582031,
32
+ "loss": 0.1476,
33
+ "rewards/accuracies": 0.3263888955116272,
34
+ "rewards/chosen": -6.533772102557123e-05,
35
+ "rewards/margins": -9.996786684496328e-05,
36
+ "rewards/rejected": 3.4630156733328477e-05,
37
+ "step": 10
38
+ },
39
+ {
40
+ "epoch": 0.02,
41
+ "learning_rate": 8.000000000000001e-07,
42
+ "logits/chosen": 1.1378858089447021,
43
+ "logits/rejected": 1.0073139667510986,
44
+ "logps/chosen": -502.5614318847656,
45
+ "logps/rejected": -353.5082092285156,
46
+ "loss": 0.1581,
47
+ "rewards/accuracies": 0.4375,
48
+ "rewards/chosen": 2.500582195352763e-05,
49
+ "rewards/margins": -1.3462633432936855e-05,
50
+ "rewards/rejected": 3.8468460843432695e-05,
51
+ "step": 20
52
+ },
53
+ {
54
+ "epoch": 0.02,
55
+ "learning_rate": 1.2000000000000002e-06,
56
+ "logits/chosen": 1.1466845273971558,
57
+ "logits/rejected": 1.0465342998504639,
58
+ "logps/chosen": -480.00213623046875,
59
+ "logps/rejected": -354.7355651855469,
60
+ "loss": 0.1352,
61
+ "rewards/accuracies": 0.5375000238418579,
62
+ "rewards/chosen": 0.00011489469034131616,
63
+ "rewards/margins": 5.035057620261796e-05,
64
+ "rewards/rejected": 6.454410322476178e-05,
65
+ "step": 30
66
+ },
67
+ {
68
+ "epoch": 0.03,
69
+ "learning_rate": 1.6000000000000001e-06,
70
+ "logits/chosen": 1.0807245969772339,
71
+ "logits/rejected": 1.0223329067230225,
72
+ "logps/chosen": -480.26605224609375,
73
+ "logps/rejected": -352.97113037109375,
74
+ "loss": 0.1494,
75
+ "rewards/accuracies": 0.518750011920929,
76
+ "rewards/chosen": 0.00020178986596874893,
77
+ "rewards/margins": 5.12802398588974e-05,
78
+ "rewards/rejected": 0.0001505096151959151,
79
+ "step": 40
80
+ },
81
+ {
82
+ "epoch": 0.04,
83
+ "learning_rate": 2.0000000000000003e-06,
84
+ "logits/chosen": 1.1475070714950562,
85
+ "logits/rejected": 1.1084539890289307,
86
+ "logps/chosen": -486.21875,
87
+ "logps/rejected": -360.5682373046875,
88
+ "loss": 0.1481,
89
+ "rewards/accuracies": 0.5,
90
+ "rewards/chosen": 0.0002630269154906273,
91
+ "rewards/margins": 0.00012177646567579359,
92
+ "rewards/rejected": 0.00014125046436674893,
93
+ "step": 50
94
+ },
95
+ {
96
+ "epoch": 0.05,
97
+ "learning_rate": 2.4000000000000003e-06,
98
+ "logits/chosen": 1.1609418392181396,
99
+ "logits/rejected": 1.028495192527771,
100
+ "logps/chosen": -490.59375,
101
+ "logps/rejected": -344.62884521484375,
102
+ "loss": 0.1493,
103
+ "rewards/accuracies": 0.581250011920929,
104
+ "rewards/chosen": 0.0005147407646290958,
105
+ "rewards/margins": 0.0002236834552604705,
106
+ "rewards/rejected": 0.0002910573093686253,
107
+ "step": 60
108
+ },
109
+ {
110
+ "epoch": 0.06,
111
+ "learning_rate": 2.8000000000000003e-06,
112
+ "logits/chosen": 1.0503188371658325,
113
+ "logits/rejected": 1.0258784294128418,
114
+ "logps/chosen": -476.06292724609375,
115
+ "logps/rejected": -371.3968200683594,
116
+ "loss": 0.151,
117
+ "rewards/accuracies": 0.59375,
118
+ "rewards/chosen": 0.0010026374366134405,
119
+ "rewards/margins": 0.00048288650577887893,
120
+ "rewards/rejected": 0.0005197509890422225,
121
+ "step": 70
122
+ },
123
+ {
124
+ "epoch": 0.06,
125
+ "learning_rate": 3.2000000000000003e-06,
126
+ "logits/chosen": 1.1313526630401611,
127
+ "logits/rejected": 1.0380247831344604,
128
+ "logps/chosen": -487.41448974609375,
129
+ "logps/rejected": -360.5604248046875,
130
+ "loss": 0.1481,
131
+ "rewards/accuracies": 0.668749988079071,
132
+ "rewards/chosen": 0.0016277392860502005,
133
+ "rewards/margins": 0.0009324050624854863,
134
+ "rewards/rejected": 0.0006953343981876969,
135
+ "step": 80
136
+ },
137
+ {
138
+ "epoch": 0.07,
139
+ "learning_rate": 3.6000000000000003e-06,
140
+ "logits/chosen": 1.1803278923034668,
141
+ "logits/rejected": 1.005684494972229,
142
+ "logps/chosen": -474.18878173828125,
143
+ "logps/rejected": -357.3034973144531,
144
+ "loss": 0.1409,
145
+ "rewards/accuracies": 0.6000000238418579,
146
+ "rewards/chosen": 0.002065226435661316,
147
+ "rewards/margins": 0.0008295776206068695,
148
+ "rewards/rejected": 0.0012356489896774292,
149
+ "step": 90
150
+ },
151
+ {
152
+ "epoch": 0.08,
153
+ "learning_rate": 4.000000000000001e-06,
154
+ "logits/chosen": 1.1424959897994995,
155
+ "logits/rejected": 1.037285327911377,
156
+ "logps/chosen": -485.64605712890625,
157
+ "logps/rejected": -341.20599365234375,
158
+ "loss": 0.1335,
159
+ "rewards/accuracies": 0.6812499761581421,
160
+ "rewards/chosen": 0.002735253656283021,
161
+ "rewards/margins": 0.0017322547500953078,
162
+ "rewards/rejected": 0.001002999022603035,
163
+ "step": 100
164
+ },
165
+ {
166
+ "epoch": 0.09,
167
+ "learning_rate": 4.4e-06,
168
+ "logits/chosen": 1.0868542194366455,
169
+ "logits/rejected": 1.029909372329712,
170
+ "logps/chosen": -492.3543395996094,
171
+ "logps/rejected": -320.5608215332031,
172
+ "loss": 0.1337,
173
+ "rewards/accuracies": 0.737500011920929,
174
+ "rewards/chosen": 0.005243994295597076,
175
+ "rewards/margins": 0.0035049724392592907,
176
+ "rewards/rejected": 0.001739021041430533,
177
+ "step": 110
178
+ },
179
+ {
180
+ "epoch": 0.1,
181
+ "learning_rate": 4.800000000000001e-06,
182
+ "logits/chosen": 1.1244226694107056,
183
+ "logits/rejected": 1.0855529308319092,
184
+ "logps/chosen": -462.7578125,
185
+ "logps/rejected": -330.9637145996094,
186
+ "loss": 0.1543,
187
+ "rewards/accuracies": 0.7437499761581421,
188
+ "rewards/chosen": 0.006767258048057556,
189
+ "rewards/margins": 0.004428135231137276,
190
+ "rewards/rejected": 0.0023391232825815678,
191
+ "step": 120
192
+ },
193
+ {
194
+ "epoch": 0.1,
195
+ "learning_rate": 4.999756310023261e-06,
196
+ "logits/chosen": 1.1845452785491943,
197
+ "logits/rejected": 1.0370099544525146,
198
+ "logps/chosen": -475.958251953125,
199
+ "logps/rejected": -337.60552978515625,
200
+ "loss": 0.1206,
201
+ "rewards/accuracies": 0.7124999761581421,
202
+ "rewards/chosen": 0.010690492577850819,
203
+ "rewards/margins": 0.008029913529753685,
204
+ "rewards/rejected": 0.0026605785824358463,
205
+ "step": 130
206
+ },
207
+ {
208
+ "epoch": 0.11,
209
+ "learning_rate": 4.997807075247147e-06,
210
+ "logits/chosen": 1.1520451307296753,
211
+ "logits/rejected": 1.0790607929229736,
212
+ "logps/chosen": -459.16485595703125,
213
+ "logps/rejected": -337.4107971191406,
214
+ "loss": 0.1476,
215
+ "rewards/accuracies": 0.71875,
216
+ "rewards/chosen": 0.011510485783219337,
217
+ "rewards/margins": 0.008873937651515007,
218
+ "rewards/rejected": 0.002636546967551112,
219
+ "step": 140
220
+ },
221
+ {
222
+ "epoch": 0.12,
223
+ "learning_rate": 4.993910125649561e-06,
224
+ "logits/chosen": 1.1496937274932861,
225
+ "logits/rejected": 1.0804468393325806,
226
+ "logps/chosen": -469.76507568359375,
227
+ "logps/rejected": -343.68109130859375,
228
+ "loss": 0.1393,
229
+ "rewards/accuracies": 0.706250011920929,
230
+ "rewards/chosen": 0.01237241830676794,
231
+ "rewards/margins": 0.01088970061391592,
232
+ "rewards/rejected": 0.0014827173436060548,
233
+ "step": 150
234
+ },
235
+ {
236
+ "epoch": 0.13,
237
+ "learning_rate": 4.988068499954578e-06,
238
+ "logits/chosen": 1.0382554531097412,
239
+ "logits/rejected": 0.9245996475219727,
240
+ "logps/chosen": -454.4856872558594,
241
+ "logps/rejected": -329.7301330566406,
242
+ "loss": 0.1229,
243
+ "rewards/accuracies": 0.65625,
244
+ "rewards/chosen": 0.011257833801209927,
245
+ "rewards/margins": 0.010106958448886871,
246
+ "rewards/rejected": 0.0011508769821375608,
247
+ "step": 160
248
+ },
249
+ {
250
+ "epoch": 0.14,
251
+ "learning_rate": 4.980286753286196e-06,
252
+ "logits/chosen": 1.1908769607543945,
253
+ "logits/rejected": 1.1219961643218994,
254
+ "logps/chosen": -479.1846618652344,
255
+ "logps/rejected": -360.87335205078125,
256
+ "loss": 0.1445,
257
+ "rewards/accuracies": 0.71875,
258
+ "rewards/chosen": 0.012364432215690613,
259
+ "rewards/margins": 0.017936835065484047,
260
+ "rewards/rejected": -0.0055724019184708595,
261
+ "step": 170
262
+ },
263
+ {
264
+ "epoch": 0.14,
265
+ "learning_rate": 4.970570953616383e-06,
266
+ "logits/chosen": 1.0914846658706665,
267
+ "logits/rejected": 1.023158311843872,
268
+ "logps/chosen": -457.4100646972656,
269
+ "logps/rejected": -361.01800537109375,
270
+ "loss": 0.1289,
271
+ "rewards/accuracies": 0.668749988079071,
272
+ "rewards/chosen": 0.009392337873578072,
273
+ "rewards/margins": 0.020237213000655174,
274
+ "rewards/rejected": -0.010844876058399677,
275
+ "step": 180
276
+ },
277
+ {
278
+ "epoch": 0.15,
279
+ "learning_rate": 4.958928677033465e-06,
280
+ "logits/chosen": 1.1615819931030273,
281
+ "logits/rejected": 1.0536638498306274,
282
+ "logps/chosen": -483.87261962890625,
283
+ "logps/rejected": -402.1483459472656,
284
+ "loss": 0.1285,
285
+ "rewards/accuracies": 0.737500011920929,
286
+ "rewards/chosen": 0.005232605151832104,
287
+ "rewards/margins": 0.028785843402147293,
288
+ "rewards/rejected": -0.023553235456347466,
289
+ "step": 190
290
+ },
291
+ {
292
+ "epoch": 0.16,
293
+ "learning_rate": 4.9453690018345144e-06,
294
+ "logits/chosen": 1.1142406463623047,
295
+ "logits/rejected": 1.008840799331665,
296
+ "logps/chosen": -462.1014709472656,
297
+ "logps/rejected": -354.94744873046875,
298
+ "loss": 0.132,
299
+ "rewards/accuracies": 0.637499988079071,
300
+ "rewards/chosen": 0.0019020054023712873,
301
+ "rewards/margins": 0.02114967629313469,
302
+ "rewards/rejected": -0.019247671589255333,
303
+ "step": 200
304
+ },
305
+ {
306
+ "epoch": 0.17,
307
+ "learning_rate": 4.9299025014463665e-06,
308
+ "logits/chosen": 1.0533361434936523,
309
+ "logits/rejected": 0.983782947063446,
310
+ "logps/chosen": -465.71258544921875,
311
+ "logps/rejected": -364.546875,
312
+ "loss": 0.1351,
313
+ "rewards/accuracies": 0.59375,
314
+ "rewards/chosen": 0.005812919698655605,
315
+ "rewards/margins": 0.0318826325237751,
316
+ "rewards/rejected": -0.02606971189379692,
317
+ "step": 210
318
+ },
319
+ {
320
+ "epoch": 0.18,
321
+ "learning_rate": 4.912541236180779e-06,
322
+ "logits/chosen": 1.0279268026351929,
323
+ "logits/rejected": 1.018845558166504,
324
+ "logps/chosen": -454.9180603027344,
325
+ "logps/rejected": -382.669677734375,
326
+ "loss": 0.144,
327
+ "rewards/accuracies": 0.6312500238418579,
328
+ "rewards/chosen": 0.007858935743570328,
329
+ "rewards/margins": 0.029104083776474,
330
+ "rewards/rejected": -0.02124514803290367,
331
+ "step": 220
332
+ },
333
+ {
334
+ "epoch": 0.18,
335
+ "learning_rate": 4.893298743830168e-06,
336
+ "logits/chosen": 1.093308925628662,
337
+ "logits/rejected": 0.9972928762435913,
338
+ "logps/chosen": -486.96795654296875,
339
+ "logps/rejected": -365.38653564453125,
340
+ "loss": 0.141,
341
+ "rewards/accuracies": 0.65625,
342
+ "rewards/chosen": 0.0009560141479596496,
343
+ "rewards/margins": 0.026852011680603027,
344
+ "rewards/rejected": -0.025895997881889343,
345
+ "step": 230
346
+ },
347
+ {
348
+ "epoch": 0.19,
349
+ "learning_rate": 4.8721900291112415e-06,
350
+ "logits/chosen": 1.1143492460250854,
351
+ "logits/rejected": 1.011824369430542,
352
+ "logps/chosen": -462.26202392578125,
353
+ "logps/rejected": -372.9997863769531,
354
+ "loss": 0.138,
355
+ "rewards/accuracies": 0.6875,
356
+ "rewards/chosen": 0.010432013310492039,
357
+ "rewards/margins": 0.028673967346549034,
358
+ "rewards/rejected": -0.01824195310473442,
359
+ "step": 240
360
+ },
361
+ {
362
+ "epoch": 0.2,
363
+ "learning_rate": 4.849231551964771e-06,
364
+ "logits/chosen": 1.0901610851287842,
365
+ "logits/rejected": 0.9668253660202026,
366
+ "logps/chosen": -450.9900817871094,
367
+ "logps/rejected": -366.1516418457031,
368
+ "loss": 0.121,
369
+ "rewards/accuracies": 0.5874999761581421,
370
+ "rewards/chosen": 0.013042752631008625,
371
+ "rewards/margins": 0.037192780524492264,
372
+ "rewards/rejected": -0.024150025099515915,
373
+ "step": 250
374
+ },
375
+ {
376
+ "epoch": 0.21,
377
+ "learning_rate": 4.824441214720629e-06,
378
+ "logits/chosen": 1.0770037174224854,
379
+ "logits/rejected": 0.9468703269958496,
380
+ "logps/chosen": -477.5690002441406,
381
+ "logps/rejected": -331.53826904296875,
382
+ "loss": 0.1342,
383
+ "rewards/accuracies": 0.6499999761581421,
384
+ "rewards/chosen": 0.01135487761348486,
385
+ "rewards/margins": 0.028281161561608315,
386
+ "rewards/rejected": -0.01692628487944603,
387
+ "step": 260
388
+ },
389
+ {
390
+ "epoch": 0.22,
391
+ "learning_rate": 4.7978383481380865e-06,
392
+ "logits/chosen": 1.0577478408813477,
393
+ "logits/rejected": 0.9135805368423462,
394
+ "logps/chosen": -453.3644104003906,
395
+ "logps/rejected": -318.5481872558594,
396
+ "loss": 0.1317,
397
+ "rewards/accuracies": 0.6937500238418579,
398
+ "rewards/chosen": 0.010126395151019096,
399
+ "rewards/margins": 0.028996825218200684,
400
+ "rewards/rejected": -0.018870430067181587,
401
+ "step": 270
402
+ },
403
+ {
404
+ "epoch": 0.22,
405
+ "learning_rate": 4.769443696332272e-06,
406
+ "logits/chosen": 1.1663763523101807,
407
+ "logits/rejected": 1.0382214784622192,
408
+ "logps/chosen": -473.4112243652344,
409
+ "logps/rejected": -363.4704284667969,
410
+ "loss": 0.1224,
411
+ "rewards/accuracies": 0.6937500238418579,
412
+ "rewards/chosen": 0.01480911485850811,
413
+ "rewards/margins": 0.03443712741136551,
414
+ "rewards/rejected": -0.019628014415502548,
415
+ "step": 280
416
+ },
417
+ {
418
+ "epoch": 0.23,
419
+ "learning_rate": 4.7392794005985324e-06,
420
+ "logits/chosen": 1.1509085893630981,
421
+ "logits/rejected": 1.0418548583984375,
422
+ "logps/chosen": -443.42169189453125,
423
+ "logps/rejected": -352.11248779296875,
424
+ "loss": 0.1454,
425
+ "rewards/accuracies": 0.65625,
426
+ "rewards/chosen": 0.01802745647728443,
427
+ "rewards/margins": 0.037281010299921036,
428
+ "rewards/rejected": -0.019253555685281754,
429
+ "step": 290
430
+ },
431
+ {
432
+ "epoch": 0.24,
433
+ "learning_rate": 4.707368982147318e-06,
434
+ "logits/chosen": 1.066710114479065,
435
+ "logits/rejected": 0.9809983372688293,
436
+ "logps/chosen": -453.10662841796875,
437
+ "logps/rejected": -366.565185546875,
438
+ "loss": 0.1345,
439
+ "rewards/accuracies": 0.6499999761581421,
440
+ "rewards/chosen": 0.012954890727996826,
441
+ "rewards/margins": 0.02726684883236885,
442
+ "rewards/rejected": -0.014311959967017174,
443
+ "step": 300
444
+ },
445
+ {
446
+ "epoch": 0.25,
447
+ "learning_rate": 4.673737323763048e-06,
448
+ "logits/chosen": 1.031423568725586,
449
+ "logits/rejected": 0.9241514205932617,
450
+ "logps/chosen": -459.67901611328125,
451
+ "logps/rejected": -367.7610778808594,
452
+ "loss": 0.1287,
453
+ "rewards/accuracies": 0.6187499761581421,
454
+ "rewards/chosen": 0.015265332534909248,
455
+ "rewards/margins": 0.025473449379205704,
456
+ "rewards/rejected": -0.01020811591297388,
457
+ "step": 310
458
+ },
459
+ {
460
+ "epoch": 0.26,
461
+ "learning_rate": 4.638410650401267e-06,
462
+ "logits/chosen": 1.131606101989746,
463
+ "logits/rejected": 1.0299171209335327,
464
+ "logps/chosen": -478.57403564453125,
465
+ "logps/rejected": -387.32318115234375,
466
+ "loss": 0.1008,
467
+ "rewards/accuracies": 0.731249988079071,
468
+ "rewards/chosen": 0.026522139087319374,
469
+ "rewards/margins": 0.04347400367259979,
470
+ "rewards/rejected": -0.01695186272263527,
471
+ "step": 320
472
+ },
473
+ {
474
+ "epoch": 0.26,
475
+ "learning_rate": 4.601416508739211e-06,
476
+ "logits/chosen": 1.1088566780090332,
477
+ "logits/rejected": 0.9741806983947754,
478
+ "logps/chosen": -460.74090576171875,
479
+ "logps/rejected": -381.3876953125,
480
+ "loss": 0.1276,
481
+ "rewards/accuracies": 0.71875,
482
+ "rewards/chosen": 0.019920390099287033,
483
+ "rewards/margins": 0.04846935719251633,
484
+ "rewards/rejected": -0.028548967093229294,
485
+ "step": 330
486
+ },
487
+ {
488
+ "epoch": 0.27,
489
+ "learning_rate": 4.562783745695738e-06,
490
+ "logits/chosen": 1.0776937007904053,
491
+ "logits/rejected": 0.9450405836105347,
492
+ "logps/chosen": -461.37115478515625,
493
+ "logps/rejected": -381.6752624511719,
494
+ "loss": 0.1173,
495
+ "rewards/accuracies": 0.7124999761581421,
496
+ "rewards/chosen": 0.01616843044757843,
497
+ "rewards/margins": 0.04438379779458046,
498
+ "rewards/rejected": -0.02821536734700203,
499
+ "step": 340
500
+ },
501
+ {
502
+ "epoch": 0.28,
503
+ "learning_rate": 4.522542485937369e-06,
504
+ "logits/chosen": 0.9827763438224792,
505
+ "logits/rejected": 0.854051947593689,
506
+ "logps/chosen": -490.47021484375,
507
+ "logps/rejected": -389.33197021484375,
508
+ "loss": 0.1149,
509
+ "rewards/accuracies": 0.7749999761581421,
510
+ "rewards/chosen": 0.018763616681098938,
511
+ "rewards/margins": 0.06531712412834167,
512
+ "rewards/rejected": -0.04655350744724274,
513
+ "step": 350
514
+ },
515
+ {
516
+ "epoch": 0.29,
517
+ "learning_rate": 4.4807241083879774e-06,
518
+ "logits/chosen": 1.0331618785858154,
519
+ "logits/rejected": 0.9459377527236938,
520
+ "logps/chosen": -487.89007568359375,
521
+ "logps/rejected": -382.6471862792969,
522
+ "loss": 0.1528,
523
+ "rewards/accuracies": 0.675000011920929,
524
+ "rewards/chosen": -0.005054169334471226,
525
+ "rewards/margins": 0.03303053602576256,
526
+ "rewards/rejected": -0.03808470442891121,
527
+ "step": 360
528
+ },
529
+ {
530
+ "epoch": 0.3,
531
+ "learning_rate": 4.437361221760449e-06,
532
+ "logits/chosen": 0.9949855804443359,
533
+ "logits/rejected": 0.8790602684020996,
534
+ "logps/chosen": -473.7728576660156,
535
+ "logps/rejected": -383.35467529296875,
536
+ "loss": 0.1473,
537
+ "rewards/accuracies": 0.6625000238418579,
538
+ "rewards/chosen": 0.020323097705841064,
539
+ "rewards/margins": 0.04043232277035713,
540
+ "rewards/rejected": -0.020109226927161217,
541
+ "step": 370
542
+ },
543
+ {
544
+ "epoch": 0.3,
545
+ "learning_rate": 4.3924876391293915e-06,
546
+ "logits/chosen": 1.079866647720337,
547
+ "logits/rejected": 0.9681353569030762,
548
+ "logps/chosen": -471.4029235839844,
549
+ "logps/rejected": -399.40728759765625,
550
+ "loss": 0.1079,
551
+ "rewards/accuracies": 0.71875,
552
+ "rewards/chosen": 0.024924132972955704,
553
+ "rewards/margins": 0.05287875980138779,
554
+ "rewards/rejected": -0.02795463241636753,
555
+ "step": 380
556
+ },
557
+ {
558
+ "epoch": 0.31,
559
+ "learning_rate": 4.346138351564711e-06,
560
+ "logits/chosen": 0.9959014654159546,
561
+ "logits/rejected": 0.9022513628005981,
562
+ "logps/chosen": -448.15350341796875,
563
+ "logps/rejected": -344.8275146484375,
564
+ "loss": 0.1143,
565
+ "rewards/accuracies": 0.668749988079071,
566
+ "rewards/chosen": 0.024763161316514015,
567
+ "rewards/margins": 0.03656725212931633,
568
+ "rewards/rejected": -0.011804090812802315,
569
+ "step": 390
570
+ },
571
+ {
572
+ "epoch": 0.32,
573
+ "learning_rate": 4.2983495008466285e-06,
574
+ "logits/chosen": 1.0768877267837524,
575
+ "logits/rejected": 1.0071511268615723,
576
+ "logps/chosen": -458.2196350097656,
577
+ "logps/rejected": -370.3987731933594,
578
+ "loss": 0.1363,
579
+ "rewards/accuracies": 0.668749988079071,
580
+ "rewards/chosen": 0.023127641528844833,
581
+ "rewards/margins": 0.03729063645005226,
582
+ "rewards/rejected": -0.014162993058562279,
583
+ "step": 400
584
+ },
585
+ {
586
+ "epoch": 0.33,
587
+ "learning_rate": 4.249158351283414e-06,
588
+ "logits/chosen": 1.0073800086975098,
589
+ "logits/rejected": 0.93903648853302,
590
+ "logps/chosen": -478.2962341308594,
591
+ "logps/rejected": -367.376953125,
592
+ "loss": 0.134,
593
+ "rewards/accuracies": 0.7124999761581421,
594
+ "rewards/chosen": 0.021456506103277206,
595
+ "rewards/margins": 0.04471339285373688,
596
+ "rewards/rejected": -0.02325688675045967,
597
+ "step": 410
598
+ },
599
+ {
600
+ "epoch": 0.34,
601
+ "learning_rate": 4.198603260653792e-06,
602
+ "logits/chosen": 0.9648386240005493,
603
+ "logits/rejected": 0.8528832197189331,
604
+ "logps/chosen": -476.27001953125,
605
+ "logps/rejected": -355.2597351074219,
606
+ "loss": 0.1368,
607
+ "rewards/accuracies": 0.6499999761581421,
608
+ "rewards/chosen": 0.018124157562851906,
609
+ "rewards/margins": 0.03248673677444458,
610
+ "rewards/rejected": -0.014362581074237823,
611
+ "step": 420
612
+ },
613
+ {
614
+ "epoch": 0.34,
615
+ "learning_rate": 4.146723650296701e-06,
616
+ "logits/chosen": 1.0027981996536255,
617
+ "logits/rejected": 0.8345006704330444,
618
+ "logps/chosen": -468.65771484375,
619
+ "logps/rejected": -377.0935363769531,
620
+ "loss": 0.1226,
621
+ "rewards/accuracies": 0.668749988079071,
622
+ "rewards/chosen": 0.01800473965704441,
623
+ "rewards/margins": 0.04834091290831566,
624
+ "rewards/rejected": -0.030336176976561546,
625
+ "step": 430
626
+ },
627
+ {
628
+ "epoch": 0.35,
629
+ "learning_rate": 4.093559974371725e-06,
630
+ "logits/chosen": 0.9172792434692383,
631
+ "logits/rejected": 0.8375406265258789,
632
+ "logps/chosen": -466.71405029296875,
633
+ "logps/rejected": -413.31903076171875,
634
+ "loss": 0.1127,
635
+ "rewards/accuracies": 0.71875,
636
+ "rewards/chosen": 0.005665500648319721,
637
+ "rewards/margins": 0.05071156471967697,
638
+ "rewards/rejected": -0.04504606872797012,
639
+ "step": 440
640
+ },
641
+ {
642
+ "epoch": 0.36,
643
+ "learning_rate": 4.039153688314146e-06,
644
+ "logits/chosen": 0.9984884262084961,
645
+ "logits/rejected": 0.8194684982299805,
646
+ "logps/chosen": -474.7478942871094,
647
+ "logps/rejected": -382.6129150390625,
648
+ "loss": 0.1281,
649
+ "rewards/accuracies": 0.6625000238418579,
650
+ "rewards/chosen": 0.0018371727783232927,
651
+ "rewards/margins": 0.054125331342220306,
652
+ "rewards/rejected": -0.052288156002759933,
653
+ "step": 450
654
+ },
655
+ {
656
+ "epoch": 0.37,
657
+ "learning_rate": 3.983547216509254e-06,
658
+ "logits/chosen": 0.9451436996459961,
659
+ "logits/rejected": 0.8137146830558777,
660
+ "logps/chosen": -462.51934814453125,
661
+ "logps/rejected": -392.1611328125,
662
+ "loss": 0.1388,
663
+ "rewards/accuracies": 0.637499988079071,
664
+ "rewards/chosen": 0.004337290767580271,
665
+ "rewards/margins": 0.0444575771689415,
666
+ "rewards/rejected": -0.04012028127908707,
667
+ "step": 460
668
+ },
669
+ {
670
+ "epoch": 0.38,
671
+ "learning_rate": 3.92678391921108e-06,
672
+ "logits/chosen": 0.9080266952514648,
673
+ "logits/rejected": 0.7958904504776001,
674
+ "logps/chosen": -459.026123046875,
675
+ "logps/rejected": -364.5742492675781,
676
+ "loss": 0.133,
677
+ "rewards/accuracies": 0.6312500238418579,
678
+ "rewards/chosen": 0.00981406681239605,
679
+ "rewards/margins": 0.04570726677775383,
680
+ "rewards/rejected": -0.03589319437742233,
681
+ "step": 470
682
+ },
683
+ {
684
+ "epoch": 0.38,
685
+ "learning_rate": 3.868908058731376e-06,
686
+ "logits/chosen": 1.0409977436065674,
687
+ "logits/rejected": 0.8651460409164429,
688
+ "logps/chosen": -467.898681640625,
689
+ "logps/rejected": -348.7823791503906,
690
+ "loss": 0.1441,
691
+ "rewards/accuracies": 0.625,
692
+ "rewards/chosen": 0.005097855813801289,
693
+ "rewards/margins": 0.03613067790865898,
694
+ "rewards/rejected": -0.03103281930088997,
695
+ "step": 480
696
+ },
697
+ {
698
+ "epoch": 0.39,
699
+ "learning_rate": 3.8099647649251984e-06,
700
+ "logits/chosen": 1.08036208152771,
701
+ "logits/rejected": 1.0272912979125977,
702
+ "logps/chosen": -471.68603515625,
703
+ "logps/rejected": -395.9718322753906,
704
+ "loss": 0.1202,
705
+ "rewards/accuracies": 0.7250000238418579,
706
+ "rewards/chosen": 0.01659775711596012,
707
+ "rewards/margins": 0.0396357998251915,
708
+ "rewards/rejected": -0.023038046434521675,
709
+ "step": 490
710
+ },
711
+ {
712
+ "epoch": 0.4,
713
+ "learning_rate": 3.7500000000000005e-06,
714
+ "logits/chosen": 1.0317163467407227,
715
+ "logits/rejected": 0.8968293070793152,
716
+ "logps/chosen": -467.38671875,
717
+ "logps/rejected": -378.4510192871094,
718
+ "loss": 0.1347,
719
+ "rewards/accuracies": 0.6937500238418579,
720
+ "rewards/chosen": 0.018395045772194862,
721
+ "rewards/margins": 0.029253458604216576,
722
+ "rewards/rejected": -0.010858410969376564,
723
+ "step": 500
724
+ },
725
+ {
726
+ "epoch": 0.41,
727
+ "learning_rate": 3.689060522675689e-06,
728
+ "logits/chosen": 0.9803446531295776,
729
+ "logits/rejected": 0.8904365301132202,
730
+ "logps/chosen": -434.4021911621094,
731
+ "logps/rejected": -359.31134033203125,
732
+ "loss": 0.125,
733
+ "rewards/accuracies": 0.675000011920929,
734
+ "rewards/chosen": 0.019461948424577713,
735
+ "rewards/margins": 0.04449521377682686,
736
+ "rewards/rejected": -0.025033265352249146,
737
+ "step": 510
738
+ },
739
+ {
740
+ "epoch": 0.42,
741
+ "learning_rate": 3.627193851723577e-06,
742
+ "logits/chosen": 0.9237390756607056,
743
+ "logits/rejected": 0.758080780506134,
744
+ "logps/chosen": -461.304443359375,
745
+ "logps/rejected": -363.09490966796875,
746
+ "loss": 0.1178,
747
+ "rewards/accuracies": 0.7124999761581421,
748
+ "rewards/chosen": 0.015167142264544964,
749
+ "rewards/margins": 0.053153105080127716,
750
+ "rewards/rejected": -0.03798595815896988,
751
+ "step": 520
752
+ },
753
+ {
754
+ "epoch": 0.42,
755
+ "learning_rate": 3.564448228912682e-06,
756
+ "logits/chosen": 0.8934131860733032,
757
+ "logits/rejected": 0.722560703754425,
758
+ "logps/chosen": -473.1156311035156,
759
+ "logps/rejected": -344.38458251953125,
760
+ "loss": 0.1304,
761
+ "rewards/accuracies": 0.612500011920929,
762
+ "rewards/chosen": 0.011030582711100578,
763
+ "rewards/margins": 0.026756322011351585,
764
+ "rewards/rejected": -0.015725741162896156,
765
+ "step": 530
766
+ },
767
+ {
768
+ "epoch": 0.43,
769
+ "learning_rate": 3.5008725813922383e-06,
770
+ "logits/chosen": 0.9554192423820496,
771
+ "logits/rejected": 0.8335037231445312,
772
+ "logps/chosen": -447.8621520996094,
773
+ "logps/rejected": -343.98187255859375,
774
+ "loss": 0.1268,
775
+ "rewards/accuracies": 0.7124999761581421,
776
+ "rewards/chosen": 0.019363736733794212,
777
+ "rewards/margins": 0.048774898052215576,
778
+ "rewards/rejected": -0.029411161318421364,
779
+ "step": 540
780
+ },
781
+ {
782
+ "epoch": 0.44,
783
+ "learning_rate": 3.436516483539781e-06,
784
+ "logits/chosen": 0.9713459014892578,
785
+ "logits/rejected": 0.851312518119812,
786
+ "logps/chosen": -462.87078857421875,
787
+ "logps/rejected": -350.909912109375,
788
+ "loss": 0.1226,
789
+ "rewards/accuracies": 0.6625000238418579,
790
+ "rewards/chosen": 0.021459558978676796,
791
+ "rewards/margins": 0.037843555212020874,
792
+ "rewards/rejected": -0.016383999958634377,
793
+ "step": 550
794
+ },
795
+ {
796
+ "epoch": 0.45,
797
+ "learning_rate": 3.3714301183045382e-06,
798
+ "logits/chosen": 0.9322063326835632,
799
+ "logits/rejected": 0.8468168377876282,
800
+ "logps/chosen": -451.48394775390625,
801
+ "logps/rejected": -363.7984313964844,
802
+ "loss": 0.1178,
803
+ "rewards/accuracies": 0.7437499761581421,
804
+ "rewards/chosen": 0.025396600365638733,
805
+ "rewards/margins": 0.04970560222864151,
806
+ "rewards/rejected": -0.024309005588293076,
807
+ "step": 560
808
+ },
809
+ {
810
+ "epoch": 0.46,
811
+ "learning_rate": 3.3056642380762783e-06,
812
+ "logits/chosen": 0.9000357389450073,
813
+ "logits/rejected": 0.7963394522666931,
814
+ "logps/chosen": -429.16839599609375,
815
+ "logps/rejected": -349.9974670410156,
816
+ "loss": 0.1462,
817
+ "rewards/accuracies": 0.6312500238418579,
818
+ "rewards/chosen": 0.019953671842813492,
819
+ "rewards/margins": 0.03375135362148285,
820
+ "rewards/rejected": -0.013797680847346783,
821
+ "step": 570
822
+ },
823
+ {
824
+ "epoch": 0.46,
825
+ "learning_rate": 3.2392701251101172e-06,
826
+ "logits/chosen": 0.8540957570075989,
827
+ "logits/rejected": 0.801906406879425,
828
+ "logps/chosen": -481.28302001953125,
829
+ "logps/rejected": -398.15185546875,
830
+ "loss": 0.1214,
831
+ "rewards/accuracies": 0.6499999761581421,
832
+ "rewards/chosen": 0.023893360048532486,
833
+ "rewards/margins": 0.04255540296435356,
834
+ "rewards/rejected": -0.018662041053175926,
835
+ "step": 580
836
+ },
837
+ {
838
+ "epoch": 0.47,
839
+ "learning_rate": 3.1722995515381644e-06,
840
+ "logits/chosen": 0.9093323945999146,
841
+ "logits/rejected": 0.7911072373390198,
842
+ "logps/chosen": -480.9325256347656,
843
+ "logps/rejected": -374.56549072265625,
844
+ "loss": 0.128,
845
+ "rewards/accuracies": 0.65625,
846
+ "rewards/chosen": 0.011839361861348152,
847
+ "rewards/margins": 0.038532059639692307,
848
+ "rewards/rejected": -0.026692699640989304,
849
+ "step": 590
850
+ },
851
+ {
852
+ "epoch": 0.48,
853
+ "learning_rate": 3.1048047389991693e-06,
854
+ "logits/chosen": 0.9539586901664734,
855
+ "logits/rejected": 0.832242488861084,
856
+ "logps/chosen": -478.30950927734375,
857
+ "logps/rejected": -388.95770263671875,
858
+ "loss": 0.1141,
859
+ "rewards/accuracies": 0.7124999761581421,
860
+ "rewards/chosen": 0.013863136060535908,
861
+ "rewards/margins": 0.05301787331700325,
862
+ "rewards/rejected": -0.03915473818778992,
863
+ "step": 600
864
+ },
865
+ {
866
+ "epoch": 0.49,
867
+ "learning_rate": 3.0368383179176584e-06,
868
+ "logits/chosen": 0.8396176099777222,
869
+ "logits/rejected": 0.7142072319984436,
870
+ "logps/chosen": -461.75445556640625,
871
+ "logps/rejected": -371.7435607910156,
872
+ "loss": 0.1417,
873
+ "rewards/accuracies": 0.675000011920929,
874
+ "rewards/chosen": 0.02106183022260666,
875
+ "rewards/margins": 0.050181370228528976,
876
+ "rewards/rejected": -0.029119547456502914,
877
+ "step": 610
878
+ },
879
+ {
880
+ "epoch": 0.5,
881
+ "learning_rate": 2.9684532864643123e-06,
882
+ "logits/chosen": 0.8672172427177429,
883
+ "logits/rejected": 0.7649189233779907,
884
+ "logps/chosen": -483.3880310058594,
885
+ "logps/rejected": -401.4527587890625,
886
+ "loss": 0.1144,
887
+ "rewards/accuracies": 0.6625000238418579,
888
+ "rewards/chosen": 0.0037638209760189056,
889
+ "rewards/margins": 0.04919921234250069,
890
+ "rewards/rejected": -0.04543538764119148,
891
+ "step": 620
892
+ },
893
+ {
894
+ "epoch": 0.5,
895
+ "learning_rate": 2.8997029692295875e-06,
896
+ "logits/chosen": 0.8982945680618286,
897
+ "logits/rejected": 0.7633360028266907,
898
+ "logps/chosen": -453.81243896484375,
899
+ "logps/rejected": -408.51947021484375,
900
+ "loss": 0.1196,
901
+ "rewards/accuracies": 0.6812499761581421,
902
+ "rewards/chosen": 0.009529736824333668,
903
+ "rewards/margins": 0.05482906103134155,
904
+ "rewards/rejected": -0.04529931768774986,
905
+ "step": 630
906
+ },
907
+ {
908
+ "epoch": 0.51,
909
+ "learning_rate": 2.8306409756428067e-06,
910
+ "logits/chosen": 0.9960956573486328,
911
+ "logits/rejected": 0.7809866666793823,
912
+ "logps/chosen": -486.93914794921875,
913
+ "logps/rejected": -397.1849365234375,
914
+ "loss": 0.0939,
915
+ "rewards/accuracies": 0.75,
916
+ "rewards/chosen": 0.016234764829277992,
917
+ "rewards/margins": 0.06638605892658234,
918
+ "rewards/rejected": -0.05015129595994949,
919
+ "step": 640
920
+ },
921
+ {
922
+ "epoch": 0.52,
923
+ "learning_rate": 2.761321158169134e-06,
924
+ "logits/chosen": 0.9853866696357727,
925
+ "logits/rejected": 0.7747653722763062,
926
+ "logps/chosen": -485.00543212890625,
927
+ "logps/rejected": -431.0743103027344,
928
+ "loss": 0.117,
929
+ "rewards/accuracies": 0.737500011920929,
930
+ "rewards/chosen": 0.013629300519824028,
931
+ "rewards/margins": 0.07910068333148956,
932
+ "rewards/rejected": -0.06547138094902039,
933
+ "step": 650
934
+ },
935
+ {
936
+ "epoch": 0.53,
937
+ "learning_rate": 2.6917975703170466e-06,
938
+ "logits/chosen": 0.9036257863044739,
939
+ "logits/rejected": 0.7418056130409241,
940
+ "logps/chosen": -483.9380798339844,
941
+ "logps/rejected": -409.62030029296875,
942
+ "loss": 0.1173,
943
+ "rewards/accuracies": 0.675000011920929,
944
+ "rewards/chosen": -0.0023396513424813747,
945
+ "rewards/margins": 0.05056001991033554,
946
+ "rewards/rejected": -0.052899666130542755,
947
+ "step": 660
948
+ },
949
+ {
950
+ "epoch": 0.54,
951
+ "learning_rate": 2.6221244244890336e-06,
952
+ "logits/chosen": 0.9365898370742798,
953
+ "logits/rejected": 0.7452448606491089,
954
+ "logps/chosen": -471.79486083984375,
955
+ "logps/rejected": -403.4354248046875,
956
+ "loss": 0.1282,
957
+ "rewards/accuracies": 0.675000011920929,
958
+ "rewards/chosen": 0.00022033098503015935,
959
+ "rewards/margins": 0.04656386375427246,
960
+ "rewards/rejected": -0.04634353518486023,
961
+ "step": 670
962
+ },
963
+ {
964
+ "epoch": 0.54,
965
+ "learning_rate": 2.5523560497083927e-06,
966
+ "logits/chosen": 0.8479242324829102,
967
+ "logits/rejected": 0.7285875082015991,
968
+ "logps/chosen": -468.72674560546875,
969
+ "logps/rejected": -373.24932861328125,
970
+ "loss": 0.1469,
971
+ "rewards/accuracies": 0.65625,
972
+ "rewards/chosen": -0.005256607197225094,
973
+ "rewards/margins": 0.03616983816027641,
974
+ "rewards/rejected": -0.041426438838243484,
975
+ "step": 680
976
+ },
977
+ {
978
+ "epoch": 0.55,
979
+ "learning_rate": 2.482546849255096e-06,
980
+ "logits/chosen": 0.865423321723938,
981
+ "logits/rejected": 0.7504838705062866,
982
+ "logps/chosen": -451.1360778808594,
983
+ "logps/rejected": -367.5789489746094,
984
+ "loss": 0.1223,
985
+ "rewards/accuracies": 0.675000011920929,
986
+ "rewards/chosen": 0.014548267237842083,
987
+ "rewards/margins": 0.04971490055322647,
988
+ "rewards/rejected": -0.035166628658771515,
989
+ "step": 690
990
+ },
991
+ {
992
+ "epoch": 0.56,
993
+ "learning_rate": 2.4127512582437486e-06,
994
+ "logits/chosen": 0.9130813479423523,
995
+ "logits/rejected": 0.7521572709083557,
996
+ "logps/chosen": -467.348388671875,
997
+ "logps/rejected": -368.24505615234375,
998
+ "loss": 0.1175,
999
+ "rewards/accuracies": 0.6812499761581421,
1000
+ "rewards/chosen": 0.015271159820258617,
1001
+ "rewards/margins": 0.05196915939450264,
1002
+ "rewards/rejected": -0.036698006093502045,
1003
+ "step": 700
1004
+ },
1005
+ {
1006
+ "epoch": 0.57,
1007
+ "learning_rate": 2.3430237011767166e-06,
1008
+ "logits/chosen": 0.8675438165664673,
1009
+ "logits/rejected": 0.7256118655204773,
1010
+ "logps/chosen": -460.74560546875,
1011
+ "logps/rejected": -374.3150329589844,
1012
+ "loss": 0.1129,
1013
+ "rewards/accuracies": 0.6499999761581421,
1014
+ "rewards/chosen": 0.013036970980465412,
1015
+ "rewards/margins": 0.03715645894408226,
1016
+ "rewards/rejected": -0.024119490757584572,
1017
+ "step": 710
1018
+ },
1019
+ {
1020
+ "epoch": 0.58,
1021
+ "learning_rate": 2.2734185495055503e-06,
1022
+ "logits/chosen": 0.935188889503479,
1023
+ "logits/rejected": 0.7854949831962585,
1024
+ "logps/chosen": -479.2552795410156,
1025
+ "logps/rejected": -388.42626953125,
1026
+ "loss": 0.1274,
1027
+ "rewards/accuracies": 0.699999988079071,
1028
+ "rewards/chosen": 0.014768016524612904,
1029
+ "rewards/margins": 0.04487290605902672,
1030
+ "rewards/rejected": -0.03010489046573639,
1031
+ "step": 720
1032
+ },
1033
+ {
1034
+ "epoch": 0.58,
1035
+ "learning_rate": 2.2039900792337477e-06,
1036
+ "logits/chosen": 0.9271212816238403,
1037
+ "logits/rejected": 0.8139986991882324,
1038
+ "logps/chosen": -468.4580993652344,
1039
+ "logps/rejected": -374.9727783203125,
1040
+ "loss": 0.111,
1041
+ "rewards/accuracies": 0.675000011920929,
1042
+ "rewards/chosen": 0.016965588554739952,
1043
+ "rewards/margins": 0.04319929704070091,
1044
+ "rewards/rejected": -0.02623371221125126,
1045
+ "step": 730
1046
+ },
1047
+ {
1048
+ "epoch": 0.59,
1049
+ "learning_rate": 2.134792428593971e-06,
1050
+ "logits/chosen": 0.9355592727661133,
1051
+ "logits/rejected": 0.7876889705657959,
1052
+ "logps/chosen": -453.23345947265625,
1053
+ "logps/rejected": -374.57196044921875,
1054
+ "loss": 0.135,
1055
+ "rewards/accuracies": 0.612500011920929,
1056
+ "rewards/chosen": 0.007375324610620737,
1057
+ "rewards/margins": 0.034845925867557526,
1058
+ "rewards/rejected": -0.027470603585243225,
1059
+ "step": 740
1060
+ },
1061
+ {
1062
+ "epoch": 0.6,
1063
+ "learning_rate": 2.0658795558326745e-06,
1064
+ "logits/chosen": 0.9382299184799194,
1065
+ "logits/rejected": 0.7689005732536316,
1066
+ "logps/chosen": -443.07867431640625,
1067
+ "logps/rejected": -385.6949157714844,
1068
+ "loss": 0.1109,
1069
+ "rewards/accuracies": 0.706250011920929,
1070
+ "rewards/chosen": 0.013460439629852772,
1071
+ "rewards/margins": 0.04786471277475357,
1072
+ "rewards/rejected": -0.034404270350933075,
1073
+ "step": 750
1074
+ },
1075
+ {
1076
+ "epoch": 0.61,
1077
+ "learning_rate": 1.997305197135089e-06,
1078
+ "logits/chosen": 0.8568140864372253,
1079
+ "logits/rejected": 0.6882631182670593,
1080
+ "logps/chosen": -460.6326599121094,
1081
+ "logps/rejected": -352.6177673339844,
1082
+ "loss": 0.1218,
1083
+ "rewards/accuracies": 0.699999988079071,
1084
+ "rewards/chosen": 0.011636492796242237,
1085
+ "rewards/margins": 0.06006443500518799,
1086
+ "rewards/rejected": -0.048427946865558624,
1087
+ "step": 760
1088
+ },
1089
+ {
1090
+ "epoch": 0.62,
1091
+ "learning_rate": 1.9291228247233607e-06,
1092
+ "logits/chosen": 0.8900226354598999,
1093
+ "logits/rejected": 0.7789937257766724,
1094
+ "logps/chosen": -469.8233337402344,
1095
+ "logps/rejected": -384.80474853515625,
1096
+ "loss": 0.1273,
1097
+ "rewards/accuracies": 0.643750011920929,
1098
+ "rewards/chosen": 0.00031064721406437457,
1099
+ "rewards/margins": 0.04238821938633919,
1100
+ "rewards/rejected": -0.042077574878931046,
1101
+ "step": 770
1102
+ },
1103
+ {
1104
+ "epoch": 0.62,
1105
+ "learning_rate": 1.8613856051605242e-06,
1106
+ "logits/chosen": 0.9109789133071899,
1107
+ "logits/rejected": 0.7407966256141663,
1108
+ "logps/chosen": -455.23309326171875,
1109
+ "logps/rejected": -355.62457275390625,
1110
+ "loss": 0.1256,
1111
+ "rewards/accuracies": 0.637499988079071,
1112
+ "rewards/chosen": 0.007995473220944405,
1113
+ "rewards/margins": 0.03913170099258423,
1114
+ "rewards/rejected": -0.03113623335957527,
1115
+ "step": 780
1116
+ },
1117
+ {
1118
+ "epoch": 0.63,
1119
+ "learning_rate": 1.7941463578928088e-06,
1120
+ "logits/chosen": 0.9689297676086426,
1121
+ "logits/rejected": 0.8149408102035522,
1122
+ "logps/chosen": -462.6448669433594,
1123
+ "logps/rejected": -381.04473876953125,
1124
+ "loss": 0.1324,
1125
+ "rewards/accuracies": 0.6937500238418579,
1126
+ "rewards/chosen": 0.009871283546090126,
1127
+ "rewards/margins": 0.04489940404891968,
1128
+ "rewards/rejected": -0.0350281223654747,
1129
+ "step": 790
1130
+ },
1131
+ {
1132
+ "epoch": 0.64,
1133
+ "learning_rate": 1.7274575140626318e-06,
1134
+ "logits/chosen": 0.9453274011611938,
1135
+ "logits/rejected": 0.8604361414909363,
1136
+ "logps/chosen": -476.329833984375,
1137
+ "logps/rejected": -395.4358215332031,
1138
+ "loss": 0.1203,
1139
+ "rewards/accuracies": 0.675000011920929,
1140
+ "rewards/chosen": 0.011838543228805065,
1141
+ "rewards/margins": 0.04980830103158951,
1142
+ "rewards/rejected": -0.03796975687146187,
1143
+ "step": 800
1144
+ },
1145
+ {
1146
+ "epoch": 0.65,
1147
+ "learning_rate": 1.661371075624363e-06,
1148
+ "logits/chosen": 0.8852574229240417,
1149
+ "logits/rejected": 0.7384047508239746,
1150
+ "logps/chosen": -475.7542419433594,
1151
+ "logps/rejected": -397.722900390625,
1152
+ "loss": 0.1191,
1153
+ "rewards/accuracies": 0.643750011920929,
1154
+ "rewards/chosen": 0.0070769889280200005,
1155
+ "rewards/margins": 0.04614837095141411,
1156
+ "rewards/rejected": -0.03907138481736183,
1157
+ "step": 810
1158
+ },
1159
+ {
1160
+ "epoch": 0.66,
1161
+ "learning_rate": 1.5959385747947697e-06,
1162
+ "logits/chosen": 0.8751562237739563,
1163
+ "logits/rejected": 0.7935196161270142,
1164
+ "logps/chosen": -469.99285888671875,
1165
+ "logps/rejected": -382.78521728515625,
1166
+ "loss": 0.1402,
1167
+ "rewards/accuracies": 0.65625,
1168
+ "rewards/chosen": 0.005080102477222681,
1169
+ "rewards/margins": 0.040591780096292496,
1170
+ "rewards/rejected": -0.035511672496795654,
1171
+ "step": 820
1172
+ },
1173
+ {
1174
+ "epoch": 0.66,
1175
+ "learning_rate": 1.5312110338697427e-06,
1176
+ "logits/chosen": 0.8930953741073608,
1177
+ "logits/rejected": 0.7850891351699829,
1178
+ "logps/chosen": -459.15869140625,
1179
+ "logps/rejected": -398.9571838378906,
1180
+ "loss": 0.1006,
1181
+ "rewards/accuracies": 0.6812499761581421,
1182
+ "rewards/chosen": 0.012067661620676517,
1183
+ "rewards/margins": 0.05800219625234604,
1184
+ "rewards/rejected": -0.045934535562992096,
1185
+ "step": 830
1186
+ },
1187
+ {
1188
+ "epoch": 0.67,
1189
+ "learning_rate": 1.467238925438646e-06,
1190
+ "logits/chosen": 0.8995454907417297,
1191
+ "logits/rejected": 0.8098067045211792,
1192
+ "logps/chosen": -474.0658264160156,
1193
+ "logps/rejected": -388.6312255859375,
1194
+ "loss": 0.124,
1195
+ "rewards/accuracies": 0.7124999761581421,
1196
+ "rewards/chosen": 0.018568776547908783,
1197
+ "rewards/margins": 0.056005608290433884,
1198
+ "rewards/rejected": -0.0374368354678154,
1199
+ "step": 840
1200
+ },
1201
+ {
1202
+ "epoch": 0.68,
1203
+ "learning_rate": 1.4040721330273063e-06,
1204
+ "logits/chosen": 0.8805915117263794,
1205
+ "logits/rejected": 0.7850027680397034,
1206
+ "logps/chosen": -463.780029296875,
1207
+ "logps/rejected": -406.8705139160156,
1208
+ "loss": 0.1189,
1209
+ "rewards/accuracies": 0.6625000238418579,
1210
+ "rewards/chosen": 0.010667243972420692,
1211
+ "rewards/margins": 0.050372637808322906,
1212
+ "rewards/rejected": -0.039705388247966766,
1213
+ "step": 850
1214
+ },
1215
+ {
1216
+ "epoch": 0.69,
1217
+ "learning_rate": 1.3417599122003464e-06,
1218
+ "logits/chosen": 0.8984284400939941,
1219
+ "logits/rejected": 0.7516420483589172,
1220
+ "logps/chosen": -491.55694580078125,
1221
+ "logps/rejected": -411.80413818359375,
1222
+ "loss": 0.1019,
1223
+ "rewards/accuracies": 0.668749988079071,
1224
+ "rewards/chosen": 0.01107702311128378,
1225
+ "rewards/margins": 0.06236129254102707,
1226
+ "rewards/rejected": -0.051284272223711014,
1227
+ "step": 860
1228
+ },
1229
+ {
1230
+ "epoch": 0.7,
1231
+ "learning_rate": 1.280350852153168e-06,
1232
+ "logits/chosen": 0.8879778981208801,
1233
+ "logits/rejected": 0.7307626008987427,
1234
+ "logps/chosen": -487.8158264160156,
1235
+ "logps/rejected": -379.3349609375,
1236
+ "loss": 0.1061,
1237
+ "rewards/accuracies": 0.7437499761581421,
1238
+ "rewards/chosen": 0.012510605156421661,
1239
+ "rewards/margins": 0.04596661403775215,
1240
+ "rewards/rejected": -0.03345600515604019,
1241
+ "step": 870
1242
+ },
1243
+ {
1244
+ "epoch": 0.7,
1245
+ "learning_rate": 1.2198928378235717e-06,
1246
+ "logits/chosen": 0.9071391224861145,
1247
+ "logits/rejected": 0.7923253774642944,
1248
+ "logps/chosen": -485.8133850097656,
1249
+ "logps/rejected": -390.81103515625,
1250
+ "loss": 0.1293,
1251
+ "rewards/accuracies": 0.706250011920929,
1252
+ "rewards/chosen": 0.007225564680993557,
1253
+ "rewards/margins": 0.057215720415115356,
1254
+ "rewards/rejected": -0.049990154802799225,
1255
+ "step": 880
1256
+ },
1257
+ {
1258
+ "epoch": 0.71,
1259
+ "learning_rate": 1.160433012552508e-06,
1260
+ "logits/chosen": 0.7823120951652527,
1261
+ "logits/rejected": 0.7412742376327515,
1262
+ "logps/chosen": -452.2621154785156,
1263
+ "logps/rejected": -397.30126953125,
1264
+ "loss": 0.1244,
1265
+ "rewards/accuracies": 0.668749988079071,
1266
+ "rewards/chosen": 0.0017619800055399537,
1267
+ "rewards/margins": 0.05425364896655083,
1268
+ "rewards/rejected": -0.05249166488647461,
1269
+ "step": 890
1270
+ },
1271
+ {
1272
+ "epoch": 0.72,
1273
+ "learning_rate": 1.1020177413231334e-06,
1274
+ "logits/chosen": 0.8177992105484009,
1275
+ "logits/rejected": 0.724165141582489,
1276
+ "logps/chosen": -465.94244384765625,
1277
+ "logps/rejected": -404.88641357421875,
1278
+ "loss": 0.1332,
1279
+ "rewards/accuracies": 0.7124999761581421,
1280
+ "rewards/chosen": 0.007125381380319595,
1281
+ "rewards/margins": 0.05769032984972,
1282
+ "rewards/rejected": -0.05056494474411011,
1283
+ "step": 900
1284
+ },
1285
+ {
1286
+ "epoch": 0.73,
1287
+ "learning_rate": 1.0446925746067768e-06,
1288
+ "logits/chosen": 0.9341510534286499,
1289
+ "logits/rejected": 0.7330499887466431,
1290
+ "logps/chosen": -458.97003173828125,
1291
+ "logps/rejected": -380.50836181640625,
1292
+ "loss": 0.121,
1293
+ "rewards/accuracies": 0.6875,
1294
+ "rewards/chosen": 0.006901240907609463,
1295
+ "rewards/margins": 0.05187239125370979,
1296
+ "rewards/rejected": -0.044971149414777756,
1297
+ "step": 910
1298
+ },
1299
+ {
1300
+ "epoch": 0.74,
1301
+ "learning_rate": 9.88502212844063e-07,
1302
+ "logits/chosen": 0.8761428594589233,
1303
+ "logits/rejected": 0.8241461515426636,
1304
+ "logps/chosen": -460.31109619140625,
1305
+ "logps/rejected": -379.6416015625,
1306
+ "loss": 0.105,
1307
+ "rewards/accuracies": 0.6937500238418579,
1308
+ "rewards/chosen": 0.009546898305416107,
1309
+ "rewards/margins": 0.046010397374629974,
1310
+ "rewards/rejected": -0.036463502794504166,
1311
+ "step": 920
1312
+ },
1313
+ {
1314
+ "epoch": 0.74,
1315
+ "learning_rate": 9.334904715888496e-07,
1316
+ "logits/chosen": 0.9266616702079773,
1317
+ "logits/rejected": 0.7574371695518494,
1318
+ "logps/chosen": -491.9678649902344,
1319
+ "logps/rejected": -402.0565490722656,
1320
+ "loss": 0.1298,
1321
+ "rewards/accuracies": 0.6875,
1322
+ "rewards/chosen": 0.0036029263865202665,
1323
+ "rewards/margins": 0.057801950722932816,
1324
+ "rewards/rejected": -0.05419902130961418,
1325
+ "step": 930
1326
+ },
1327
+ {
1328
+ "epoch": 0.75,
1329
+ "learning_rate": 8.797002473421729e-07,
1330
+ "logits/chosen": 0.8647912740707397,
1331
+ "logits/rejected": 0.6777385473251343,
1332
+ "logps/chosen": -461.28460693359375,
1333
+ "logps/rejected": -378.638427734375,
1334
+ "loss": 0.1206,
1335
+ "rewards/accuracies": 0.643750011920929,
1336
+ "rewards/chosen": 0.017040668055415154,
1337
+ "rewards/margins": 0.04332052543759346,
1338
+ "rewards/rejected": -0.02627985179424286,
1339
+ "step": 940
1340
+ },
1341
+ {
1342
+ "epoch": 0.76,
1343
+ "learning_rate": 8.271734841028553e-07,
1344
+ "logits/chosen": 0.9620984196662903,
1345
+ "logits/rejected": 0.7840194702148438,
1346
+ "logps/chosen": -478.987060546875,
1347
+ "logps/rejected": -392.83966064453125,
1348
+ "loss": 0.129,
1349
+ "rewards/accuracies": 0.7124999761581421,
1350
+ "rewards/chosen": 0.015017243102192879,
1351
+ "rewards/margins": 0.05055858939886093,
1352
+ "rewards/rejected": -0.0355413481593132,
1353
+ "step": 950
1354
+ },
1355
+ {
1356
+ "epoch": 0.77,
1357
+ "learning_rate": 7.759511406608255e-07,
1358
+ "logits/chosen": 0.8557583689689636,
1359
+ "logits/rejected": 0.6884294748306274,
1360
+ "logps/chosen": -492.0303649902344,
1361
+ "logps/rejected": -396.324462890625,
1362
+ "loss": 0.1418,
1363
+ "rewards/accuracies": 0.643750011920929,
1364
+ "rewards/chosen": 0.004057017620652914,
1365
+ "rewards/margins": 0.02802330255508423,
1366
+ "rewards/rejected": -0.023966282606124878,
1367
+ "step": 960
1368
+ },
1369
+ {
1370
+ "epoch": 0.78,
1371
+ "learning_rate": 7.260731586586983e-07,
1372
+ "logits/chosen": 0.9453495144844055,
1373
+ "logits/rejected": 0.8003050088882446,
1374
+ "logps/chosen": -483.55792236328125,
1375
+ "logps/rejected": -375.1344909667969,
1376
+ "loss": 0.1258,
1377
+ "rewards/accuracies": 0.6625000238418579,
1378
+ "rewards/chosen": 0.0010669174371287227,
1379
+ "rewards/margins": 0.046425946056842804,
1380
+ "rewards/rejected": -0.04535902291536331,
1381
+ "step": 970
1382
+ },
1383
+ {
1384
+ "epoch": 0.78,
1385
+ "learning_rate": 6.775784314464717e-07,
1386
+ "logits/chosen": 0.8808594942092896,
1387
+ "logits/rejected": 0.7378624081611633,
1388
+ "logps/chosen": -463.3553771972656,
1389
+ "logps/rejected": -356.5357666015625,
1390
+ "loss": 0.1174,
1391
+ "rewards/accuracies": 0.6875,
1392
+ "rewards/chosen": 0.007392757572233677,
1393
+ "rewards/margins": 0.05343680828809738,
1394
+ "rewards/rejected": -0.046044059097766876,
1395
+ "step": 980
1396
+ },
1397
+ {
1398
+ "epoch": 0.79,
1399
+ "learning_rate": 6.305047737536707e-07,
1400
+ "logits/chosen": 0.905502200126648,
1401
+ "logits/rejected": 0.7184777855873108,
1402
+ "logps/chosen": -479.0455627441406,
1403
+ "logps/rejected": -375.6910095214844,
1404
+ "loss": 0.1193,
1405
+ "rewards/accuracies": 0.6812499761581421,
1406
+ "rewards/chosen": 0.006599182728677988,
1407
+ "rewards/margins": 0.0473155751824379,
1408
+ "rewards/rejected": -0.040716394782066345,
1409
+ "step": 990
1410
+ },
1411
+ {
1412
+ "epoch": 0.8,
1413
+ "learning_rate": 5.848888922025553e-07,
1414
+ "logits/chosen": 0.9522923231124878,
1415
+ "logits/rejected": 0.8169105648994446,
1416
+ "logps/chosen": -464.9600524902344,
1417
+ "logps/rejected": -401.93902587890625,
1418
+ "loss": 0.1295,
1419
+ "rewards/accuracies": 0.6187499761581421,
1420
+ "rewards/chosen": 0.014724774286150932,
1421
+ "rewards/margins": 0.05533941462635994,
1422
+ "rewards/rejected": -0.04061463475227356,
1423
+ "step": 1000
1424
+ },
1425
+ {
1426
+ "epoch": 0.81,
1427
+ "learning_rate": 5.407663566854008e-07,
1428
+ "logits/chosen": 0.9025578498840332,
1429
+ "logits/rejected": 0.775783360004425,
1430
+ "logps/chosen": -468.3810119628906,
1431
+ "logps/rejected": -405.38909912109375,
1432
+ "loss": 0.1254,
1433
+ "rewards/accuracies": 0.6812499761581421,
1434
+ "rewards/chosen": 0.008423840627074242,
1435
+ "rewards/margins": 0.041560642421245575,
1436
+ "rewards/rejected": -0.03313680365681648,
1437
+ "step": 1010
1438
+ },
1439
+ {
1440
+ "epoch": 0.82,
1441
+ "learning_rate": 4.981715726281666e-07,
1442
+ "logits/chosen": 0.881443977355957,
1443
+ "logits/rejected": 0.7977533340454102,
1444
+ "logps/chosen": -461.211181640625,
1445
+ "logps/rejected": -386.59503173828125,
1446
+ "loss": 0.1258,
1447
+ "rewards/accuracies": 0.675000011920929,
1448
+ "rewards/chosen": 0.008033149875700474,
1449
+ "rewards/margins": 0.04333629831671715,
1450
+ "rewards/rejected": -0.03530315309762955,
1451
+ "step": 1020
1452
+ },
1453
+ {
1454
+ "epoch": 0.82,
1455
+ "learning_rate": 4.5713775416217884e-07,
1456
+ "logits/chosen": 0.9581049084663391,
1457
+ "logits/rejected": 0.8421887159347534,
1458
+ "logps/chosen": -473.08892822265625,
1459
+ "logps/rejected": -391.439208984375,
1460
+ "loss": 0.1228,
1461
+ "rewards/accuracies": 0.6625000238418579,
1462
+ "rewards/chosen": 0.005411247257143259,
1463
+ "rewards/margins": 0.045824769884347916,
1464
+ "rewards/rejected": -0.040413517504930496,
1465
+ "step": 1030
1466
+ },
1467
+ {
1468
+ "epoch": 0.83,
1469
+ "learning_rate": 4.1769689822475147e-07,
1470
+ "logits/chosen": 0.9711424112319946,
1471
+ "logits/rejected": 0.8476129770278931,
1472
+ "logps/chosen": -471.318359375,
1473
+ "logps/rejected": -386.77899169921875,
1474
+ "loss": 0.1284,
1475
+ "rewards/accuracies": 0.6625000238418579,
1476
+ "rewards/chosen": 0.004041860345751047,
1477
+ "rewards/margins": 0.04613048955798149,
1478
+ "rewards/rejected": -0.04208862781524658,
1479
+ "step": 1040
1480
+ },
1481
+ {
1482
+ "epoch": 0.84,
1483
+ "learning_rate": 3.798797596089351e-07,
1484
+ "logits/chosen": 0.9549845457077026,
1485
+ "logits/rejected": 0.7986112833023071,
1486
+ "logps/chosen": -479.4010314941406,
1487
+ "logps/rejected": -408.44085693359375,
1488
+ "loss": 0.13,
1489
+ "rewards/accuracies": 0.668749988079071,
1490
+ "rewards/chosen": 0.021130617707967758,
1491
+ "rewards/margins": 0.054322708398103714,
1492
+ "rewards/rejected": -0.033192090690135956,
1493
+ "step": 1050
1494
+ },
1495
+ {
1496
+ "epoch": 0.85,
1497
+ "learning_rate": 3.4371582698185636e-07,
1498
+ "logits/chosen": 0.8994715809822083,
1499
+ "logits/rejected": 0.7577309608459473,
1500
+ "logps/chosen": -479.24200439453125,
1501
+ "logps/rejected": -382.0733947753906,
1502
+ "loss": 0.1226,
1503
+ "rewards/accuracies": 0.6812499761581421,
1504
+ "rewards/chosen": 0.013279114849865437,
1505
+ "rewards/margins": 0.04814552143216133,
1506
+ "rewards/rejected": -0.03486640378832817,
1507
+ "step": 1060
1508
+ },
1509
+ {
1510
+ "epoch": 0.86,
1511
+ "learning_rate": 3.092332998903416e-07,
1512
+ "logits/chosen": 0.837253749370575,
1513
+ "logits/rejected": 0.7374977469444275,
1514
+ "logps/chosen": -454.10382080078125,
1515
+ "logps/rejected": -389.86248779296875,
1516
+ "loss": 0.1295,
1517
+ "rewards/accuracies": 0.643750011920929,
1518
+ "rewards/chosen": 0.0033629995305091143,
1519
+ "rewards/margins": 0.03582575172185898,
1520
+ "rewards/rejected": -0.03246275335550308,
1521
+ "step": 1070
1522
+ },
1523
+ {
1524
+ "epoch": 0.86,
1525
+ "learning_rate": 2.764590667717562e-07,
1526
+ "logits/chosen": 0.8877947926521301,
1527
+ "logits/rejected": 0.7853025794029236,
1528
+ "logps/chosen": -491.3389587402344,
1529
+ "logps/rejected": -418.23333740234375,
1530
+ "loss": 0.09,
1531
+ "rewards/accuracies": 0.7250000238418579,
1532
+ "rewards/chosen": 0.0025968037080019712,
1533
+ "rewards/margins": 0.060978036373853683,
1534
+ "rewards/rejected": -0.058381229639053345,
1535
+ "step": 1080
1536
+ },
1537
+ {
1538
+ "epoch": 0.87,
1539
+ "learning_rate": 2.454186839872158e-07,
1540
+ "logits/chosen": 0.9278301000595093,
1541
+ "logits/rejected": 0.7884716391563416,
1542
+ "logps/chosen": -478.32537841796875,
1543
+ "logps/rejected": -417.85992431640625,
1544
+ "loss": 0.1139,
1545
+ "rewards/accuracies": 0.737500011920929,
1546
+ "rewards/chosen": 0.015951160341501236,
1547
+ "rewards/margins": 0.06339136511087418,
1548
+ "rewards/rejected": -0.04744020476937294,
1549
+ "step": 1090
1550
+ },
1551
+ {
1552
+ "epoch": 0.88,
1553
+ "learning_rate": 2.1613635589349756e-07,
1554
+ "logits/chosen": 0.9455703496932983,
1555
+ "logits/rejected": 0.7855480909347534,
1556
+ "logps/chosen": -479.3771057128906,
1557
+ "logps/rejected": -387.804931640625,
1558
+ "loss": 0.1409,
1559
+ "rewards/accuracies": 0.668749988079071,
1560
+ "rewards/chosen": 0.005107102915644646,
1561
+ "rewards/margins": 0.043499238789081573,
1562
+ "rewards/rejected": -0.03839213401079178,
1563
+ "step": 1100
1564
+ },
1565
+ {
1566
+ "epoch": 0.89,
1567
+ "learning_rate": 1.8863491596921745e-07,
1568
+ "logits/chosen": 0.9270991086959839,
1569
+ "logits/rejected": 0.8039152026176453,
1570
+ "logps/chosen": -476.14898681640625,
1571
+ "logps/rejected": -371.51458740234375,
1572
+ "loss": 0.1168,
1573
+ "rewards/accuracies": 0.6937500238418579,
1574
+ "rewards/chosen": 0.011659547686576843,
1575
+ "rewards/margins": 0.05175407603383064,
1576
+ "rewards/rejected": -0.0400945320725441,
1577
+ "step": 1110
1578
+ },
1579
+ {
1580
+ "epoch": 0.9,
1581
+ "learning_rate": 1.629358090099639e-07,
1582
+ "logits/chosen": 0.9182494878768921,
1583
+ "logits/rejected": 0.8193320035934448,
1584
+ "logps/chosen": -479.65570068359375,
1585
+ "logps/rejected": -407.64569091796875,
1586
+ "loss": 0.114,
1587
+ "rewards/accuracies": 0.699999988079071,
1588
+ "rewards/chosen": 0.01620074175298214,
1589
+ "rewards/margins": 0.046776242554187775,
1590
+ "rewards/rejected": -0.030575498938560486,
1591
+ "step": 1120
1592
+ },
1593
+ {
1594
+ "epoch": 0.9,
1595
+ "learning_rate": 1.3905907440629752e-07,
1596
+ "logits/chosen": 0.8919647932052612,
1597
+ "logits/rejected": 0.8270837068557739,
1598
+ "logps/chosen": -475.1752014160156,
1599
+ "logps/rejected": -405.669921875,
1600
+ "loss": 0.1212,
1601
+ "rewards/accuracies": 0.5874999761581421,
1602
+ "rewards/chosen": 0.00893275998532772,
1603
+ "rewards/margins": 0.04200071096420288,
1604
+ "rewards/rejected": -0.03306794911623001,
1605
+ "step": 1130
1606
+ },
1607
+ {
1608
+ "epoch": 0.91,
1609
+ "learning_rate": 1.1702333051763271e-07,
1610
+ "logits/chosen": 0.8223358392715454,
1611
+ "logits/rejected": 0.6920078992843628,
1612
+ "logps/chosen": -467.2176818847656,
1613
+ "logps/rejected": -372.9118347167969,
1614
+ "loss": 0.1354,
1615
+ "rewards/accuracies": 0.6937500238418579,
1616
+ "rewards/chosen": 0.007803215179592371,
1617
+ "rewards/margins": 0.05598113685846329,
1618
+ "rewards/rejected": -0.04817792773246765,
1619
+ "step": 1140
1620
+ },
1621
+ {
1622
+ "epoch": 0.92,
1623
+ "learning_rate": 9.684576015420277e-08,
1624
+ "logits/chosen": 0.9437923431396484,
1625
+ "logits/rejected": 0.8367222547531128,
1626
+ "logps/chosen": -466.2740783691406,
1627
+ "logps/rejected": -400.8631286621094,
1628
+ "loss": 0.1299,
1629
+ "rewards/accuracies": 0.7124999761581421,
1630
+ "rewards/chosen": 0.01643536426126957,
1631
+ "rewards/margins": 0.049150340259075165,
1632
+ "rewards/rejected": -0.03271497040987015,
1633
+ "step": 1150
1634
+ },
1635
+ {
1636
+ "epoch": 0.93,
1637
+ "learning_rate": 7.854209717842231e-08,
1638
+ "logits/chosen": 0.8424699902534485,
1639
+ "logits/rejected": 0.7011204361915588,
1640
+ "logps/chosen": -468.2311096191406,
1641
+ "logps/rejected": -381.13287353515625,
1642
+ "loss": 0.1157,
1643
+ "rewards/accuracies": 0.737500011920929,
1644
+ "rewards/chosen": 0.016398664563894272,
1645
+ "rewards/margins": 0.06015818193554878,
1646
+ "rewards/rejected": -0.04375952482223511,
1647
+ "step": 1160
1648
+ },
1649
+ {
1650
+ "epoch": 0.94,
1651
+ "learning_rate": 6.212661423609184e-08,
1652
+ "logits/chosen": 0.8363115191459656,
1653
+ "logits/rejected": 0.783504843711853,
1654
+ "logps/chosen": -489.8089294433594,
1655
+ "logps/rejected": -389.686767578125,
1656
+ "loss": 0.1083,
1657
+ "rewards/accuracies": 0.6625000238418579,
1658
+ "rewards/chosen": 0.014369189739227295,
1659
+ "rewards/margins": 0.05540204793214798,
1660
+ "rewards/rejected": -0.041032858192920685,
1661
+ "step": 1170
1662
+ },
1663
+ {
1664
+ "epoch": 0.94,
1665
+ "learning_rate": 4.761211162702117e-08,
1666
+ "logits/chosen": 0.9592778086662292,
1667
+ "logits/rejected": 0.8283406496047974,
1668
+ "logps/chosen": -455.68017578125,
1669
+ "logps/rejected": -386.79205322265625,
1670
+ "loss": 0.1205,
1671
+ "rewards/accuracies": 0.706250011920929,
1672
+ "rewards/chosen": 0.014277097769081593,
1673
+ "rewards/margins": 0.05047346279025078,
1674
+ "rewards/rejected": -0.03619636222720146,
1675
+ "step": 1180
1676
+ },
1677
+ {
1678
+ "epoch": 0.95,
1679
+ "learning_rate": 3.5009907323737826e-08,
1680
+ "logits/chosen": 0.9002725481987,
1681
+ "logits/rejected": 0.7559747099876404,
1682
+ "logps/chosen": -469.6654357910156,
1683
+ "logps/rejected": -385.0262756347656,
1684
+ "loss": 0.1152,
1685
+ "rewards/accuracies": 0.675000011920929,
1686
+ "rewards/chosen": 0.006471751723438501,
1687
+ "rewards/margins": 0.04549337550997734,
1688
+ "rewards/rejected": -0.039021626114845276,
1689
+ "step": 1190
1690
+ },
1691
+ {
1692
+ "epoch": 0.96,
1693
+ "learning_rate": 2.4329828146074096e-08,
1694
+ "logits/chosen": 0.9445624351501465,
1695
+ "logits/rejected": 0.7747251391410828,
1696
+ "logps/chosen": -494.6180725097656,
1697
+ "logps/rejected": -381.2744445800781,
1698
+ "loss": 0.1219,
1699
+ "rewards/accuracies": 0.625,
1700
+ "rewards/chosen": 0.004516331944614649,
1701
+ "rewards/margins": 0.04709627851843834,
1702
+ "rewards/rejected": -0.04257994890213013,
1703
+ "step": 1200
1704
+ },
1705
+ {
1706
+ "epoch": 0.97,
1707
+ "learning_rate": 1.5580202098509078e-08,
1708
+ "logits/chosen": 0.9075485467910767,
1709
+ "logits/rejected": 0.791230320930481,
1710
+ "logps/chosen": -483.7403259277344,
1711
+ "logps/rejected": -374.10101318359375,
1712
+ "loss": 0.1168,
1713
+ "rewards/accuracies": 0.6875,
1714
+ "rewards/chosen": 0.005151915363967419,
1715
+ "rewards/margins": 0.056254614144563675,
1716
+ "rewards/rejected": -0.05110269784927368,
1717
+ "step": 1210
1718
+ },
1719
+ {
1720
+ "epoch": 0.98,
1721
+ "learning_rate": 8.767851876239075e-09,
1722
+ "logits/chosen": 0.9745057225227356,
1723
+ "logits/rejected": 0.8449977040290833,
1724
+ "logps/chosen": -461.8253479003906,
1725
+ "logps/rejected": -399.06011962890625,
1726
+ "loss": 0.1325,
1727
+ "rewards/accuracies": 0.699999988079071,
1728
+ "rewards/chosen": 0.013572081923484802,
1729
+ "rewards/margins": 0.054333366453647614,
1730
+ "rewards/rejected": -0.04076128453016281,
1731
+ "step": 1220
1732
+ },
1733
+ {
1734
+ "epoch": 0.98,
1735
+ "learning_rate": 3.8980895450474455e-09,
1736
+ "logits/chosen": 0.8922117948532104,
1737
+ "logits/rejected": 0.6962934732437134,
1738
+ "logps/chosen": -454.9315490722656,
1739
+ "logps/rejected": -386.56591796875,
1740
+ "loss": 0.1104,
1741
+ "rewards/accuracies": 0.7250000238418579,
1742
+ "rewards/chosen": 0.00724564865231514,
1743
+ "rewards/margins": 0.05632048100233078,
1744
+ "rewards/rejected": -0.04907483607530594,
1745
+ "step": 1230
1746
+ },
1747
+ {
1748
+ "epoch": 0.99,
1749
+ "learning_rate": 9.747123991141193e-10,
1750
+ "logits/chosen": 0.9504325985908508,
1751
+ "logits/rejected": 0.713564932346344,
1752
+ "logps/chosen": -479.7052307128906,
1753
+ "logps/rejected": -373.42852783203125,
1754
+ "loss": 0.1271,
1755
+ "rewards/accuracies": 0.706250011920929,
1756
+ "rewards/chosen": 0.010545340366661549,
1757
+ "rewards/margins": 0.05146101862192154,
1758
+ "rewards/rejected": -0.04091567546129227,
1759
+ "step": 1240
1760
+ },
1761
+ {
1762
+ "epoch": 1.0,
1763
+ "learning_rate": 0.0,
1764
+ "logits/chosen": 0.8859880566596985,
1765
+ "logits/rejected": 0.726716160774231,
1766
+ "logps/chosen": -467.9039001464844,
1767
+ "logps/rejected": -371.0747375488281,
1768
+ "loss": 0.1236,
1769
+ "rewards/accuracies": 0.706250011920929,
1770
+ "rewards/chosen": 0.0073653412982821465,
1771
+ "rewards/margins": 0.05689455196261406,
1772
+ "rewards/rejected": -0.04952920600771904,
1773
+ "step": 1250
1774
+ },
1775
+ {
1776
+ "epoch": 1.0,
1777
+ "step": 1250,
1778
+ "total_flos": 0.0,
1779
+ "train_loss": 0.12703038393855096,
1780
+ "train_runtime": 9089.2921,
1781
+ "train_samples_per_second": 2.2,
1782
+ "train_steps_per_second": 0.138
1783
+ }
1784
+ ],
1785
+ "logging_steps": 10,
1786
+ "max_steps": 1250,
1787
+ "num_input_tokens_seen": 0,
1788
+ "num_train_epochs": 1,
1789
+ "save_steps": 100,
1790
+ "total_flos": 0.0,
1791
+ "train_batch_size": 4,
1792
+ "trial_name": null,
1793
+ "trial_params": null
1794
+ }
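
The closing entry above doubles as a quick consistency check on the reported throughput: 1250 steps over a runtime of roughly 9089 s gives about 0.138 steps per second, and 2.2 samples per second over that same runtime works out to roughly 16 samples per optimizer step. A minimal Python sketch of that check, and of pulling the logged DPO reward margins back out of the file, is shown below; the filename and relative path are assumptions, and the key names simply mirror the entries logged above.

```python
import json

# Minimal sketch: read the trainer_state.json saved with this commit
# (path is an assumption) and sanity-check the reported throughput.
with open("trainer_state.json") as f:
    state = json.load(f)

final = state["log_history"][-1]      # closing entry with the train_* stats
steps = state["max_steps"]            # 1250
runtime = final["train_runtime"]      # ~9089.29 s

# steps / runtime should reproduce train_steps_per_second (~0.138), and
# samples_per_second * runtime / steps gives the effective batch size (~16).
print("steps/sec   :", round(steps / runtime, 3))
print("samples/step:", round(final["train_samples_per_second"] * runtime / steps, 1))

# Collect the logged reward margins and accuracies from each training entry.
margins = [(e["step"], e["rewards/margins"], e["rewards/accuracies"])
           for e in state["log_history"] if "rewards/margins" in e]
for step, margin, acc in margins[-5:]:
    print(f"step {step:4d}  margin {margin:+.4f}  accuracy {acc:.3f}")
```

On this log, the last few printed rows should show margins around 0.05 with accuracies near 0.7, matching the tail of the entries above.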