yakazimir committed
Commit 4a90d6f
1 Parent(s): 22da528

Model save

Files changed (4)
  1. README.md +26 -32
  2. all_results.json +4 -18
  3. train_results.json +4 -4
  4. trainer_state.json +0 -0
README.md CHANGED
@@ -3,15 +3,9 @@ library_name: transformers
 license: other
 base_model: trl-lib/qwen1.5-0.5b-sft
 tags:
-- alignment-handbook
 - trl
 - simpo
 - generated_from_trainer
-- trl
-- simpo
-- generated_from_trainer
-datasets:
-- yakazimir/ultrafeedback_binarized
 model-index:
 - name: qwen_cpo_entropy_0_3
   results: []
@@ -22,18 +16,18 @@ should probably proofread and complete it, then remove this comment. -->
 
 # qwen_cpo_entropy_0_3
 
-This model is a fine-tuned version of [trl-lib/qwen1.5-0.5b-sft](https://huggingface.co/trl-lib/qwen1.5-0.5b-sft) on the yakazimir/ultrafeedback_binarized dataset.
+This model is a fine-tuned version of [trl-lib/qwen1.5-0.5b-sft](https://huggingface.co/trl-lib/qwen1.5-0.5b-sft) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.0436
-- Sft Loss: 1.4856
-- Rewards/chosen: -1.5353
-- Rewards/rejected: -2.2017
-- Rewards/accuracies: 0.6476
-- Rewards/margins: 0.6664
-- Logps/rejected: -2.2017
-- Logps/chosen: -1.5353
-- Logits/rejected: -0.4248
-- Logits/chosen: -0.4735
+- Loss: 1.0416
+- Sft Loss: 1.4031
+- Rewards/chosen: -1.3990
+- Rewards/rejected: -1.8440
+- Rewards/accuracies: 0.6157
+- Rewards/margins: 0.4450
+- Logps/rejected: -1.8440
+- Logps/chosen: -1.3990
+- Logits/rejected: 0.2187
+- Logits/chosen: 0.1269
 
 ## Model description
 
@@ -52,7 +46,7 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 3e-06
+- learning_rate: 1e-06
 - train_batch_size: 2
 - eval_batch_size: 4
 - seed: 42
@@ -68,20 +62,20 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Sft Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
 |:-------------:|:------:|:----:|:---------------:|:--------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
-| 1.0742 | 0.2141 | 400 | 1.0823 | 1.3696 | -1.3461 | -1.5309 | 0.5779 | 0.1848 | -1.5309 | -1.3461 | 0.3537 | 0.2666 |
-| 1.0513 | 0.4282 | 800 | 1.0583 | 1.4056 | -1.4223 | -1.7731 | 0.6039 | 0.3508 | -1.7731 | -1.4223 | 0.2684 | 0.1774 |
-| 1.0763 | 0.6422 | 1200 | 1.0495 | 1.3954 | -1.3876 | -1.7498 | 0.6046 | 0.3622 | -1.7498 | -1.3876 | 0.4388 | 0.3313 |
-| 1.0436 | 0.8563 | 1600 | 1.0524 | 1.3880 | -1.3712 | -1.6780 | 0.6091 | 0.3068 | -1.6780 | -1.3712 | 0.5673 | 0.4476 |
-| 1.0569 | 1.0704 | 2000 | 1.0427 | 1.4160 | -1.4133 | -1.8531 | 0.6298 | 0.4398 | -1.8531 | -1.4133 | 0.0092 | -0.0695 |
-| 0.9655 | 1.2845 | 2400 | 1.0376 | 1.4169 | -1.4205 | -1.9133 | 0.6358 | 0.4928 | -1.9133 | -1.4205 | -0.2668 | -0.3251 |
-| 1.0333 | 1.4986 | 2800 | 1.0458 | 1.3973 | -1.3793 | -1.7731 | 0.6128 | 0.3937 | -1.7731 | -1.3793 | -0.0046 | -0.0841 |
-| 0.9824 | 1.7127 | 3200 | 1.0347 | 1.4063 | -1.3916 | -1.8345 | 0.6283 | 0.4429 | -1.8345 | -1.3916 | -0.2377 | -0.2977 |
-| 0.9557 | 1.9267 | 3600 | 1.0309 | 1.4319 | -1.4343 | -1.9644 | 0.6454 | 0.5301 | -1.9644 | -1.4343 | -0.2903 | -0.3472 |
-| 0.8559 | 2.1408 | 4000 | 1.0420 | 1.4888 | -1.5362 | -2.1635 | 0.6550 | 0.6272 | -2.1635 | -1.5362 | -0.1761 | -0.2434 |
-| 0.8788 | 2.3549 | 4400 | 1.0414 | 1.4794 | -1.5273 | -2.1771 | 0.6469 | 0.6498 | -2.1771 | -1.5273 | -0.2963 | -0.3552 |
-| 0.8747 | 2.5690 | 4800 | 1.0419 | 1.4756 | -1.5253 | -2.1757 | 0.6454 | 0.6504 | -2.1757 | -1.5253 | -0.3952 | -0.4464 |
-| 0.8717 | 2.7831 | 5200 | 1.0438 | 1.4855 | -1.5370 | -2.2063 | 0.6469 | 0.6693 | -2.2063 | -1.5370 | -0.4497 | -0.4964 |
-| 0.8816 | 2.9972 | 5600 | 1.0436 | 1.4855 | -1.5353 | -2.2017 | 0.6476 | 0.6664 | -2.2017 | -1.5353 | -0.4248 | -0.4735 |
+| 1.09 | 0.2141 | 400 | 1.1010 | 1.3681 | -1.3477 | -1.4855 | 0.5586 | 0.1378 | -1.4855 | -1.3477 | 0.3207 | 0.2350 |
+| 1.0764 | 0.4282 | 800 | 1.0739 | 1.3759 | -1.3603 | -1.5873 | 0.5823 | 0.2270 | -1.5873 | -1.3603 | 0.3806 | 0.2884 |
+| 1.077 | 0.6422 | 1200 | 1.0591 | 1.3822 | -1.3685 | -1.6704 | 0.5935 | 0.3019 | -1.6704 | -1.3685 | 0.3589 | 0.2649 |
+| 1.0489 | 0.8563 | 1600 | 1.0555 | 1.3767 | -1.3518 | -1.6477 | 0.5905 | 0.2959 | -1.6477 | -1.3518 | 0.4297 | 0.3293 |
+| 1.1366 | 1.0704 | 2000 | 1.0496 | 1.3798 | -1.3555 | -1.7040 | 0.5987 | 0.3484 | -1.7040 | -1.3555 | 0.3416 | 0.2453 |
+| 1.0133 | 1.2845 | 2400 | 1.0461 | 1.3864 | -1.3639 | -1.7321 | 0.6053 | 0.3682 | -1.7321 | -1.3639 | 0.3701 | 0.2708 |
+| 1.1144 | 1.4986 | 2800 | 1.0443 | 1.3887 | -1.3652 | -1.7447 | 0.6105 | 0.3794 | -1.7447 | -1.3652 | 0.2150 | 0.1278 |
+| 1.0196 | 1.7127 | 3200 | 1.0449 | 1.3841 | -1.3615 | -1.7338 | 0.6142 | 0.3723 | -1.7338 | -1.3615 | 0.1872 | 0.1007 |
+| 1.0023 | 1.9267 | 3600 | 1.0405 | 1.3927 | -1.3767 | -1.7830 | 0.6120 | 0.4063 | -1.7830 | -1.3767 | 0.2211 | 0.1322 |
+| 0.9654 | 2.1408 | 4000 | 1.0418 | 1.3967 | -1.3910 | -1.8183 | 0.6180 | 0.4273 | -1.8183 | -1.3910 | 0.2405 | 0.1482 |
+| 0.9676 | 2.3549 | 4400 | 1.0418 | 1.4054 | -1.4061 | -1.8540 | 0.6231 | 0.4479 | -1.8540 | -1.4061 | 0.2064 | 0.1158 |
+| 0.9789 | 2.5690 | 4800 | 1.0420 | 1.4009 | -1.3974 | -1.8380 | 0.6142 | 0.4406 | -1.8380 | -1.3974 | 0.1887 | 0.0996 |
+| 1.0003 | 2.7831 | 5200 | 1.0413 | 1.4027 | -1.3986 | -1.8438 | 0.6187 | 0.4452 | -1.8438 | -1.3986 | 0.2046 | 0.1137 |
+| 0.9909 | 2.9972 | 5600 | 1.0416 | 1.4031 | -1.3990 | -1.8440 | 0.6157 | 0.4450 | -1.8440 | -1.3990 | 0.2187 | 0.1269 |
 
 
 ### Framework versions
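The card's trl and simpo tags together with the hyperparameters above suggest the run was launched through TRL's CPOTrainer, which exposes the SimPO loss as a CPOConfig option. A minimal sketch under that assumption (not the author's actual script: the loss_type, dataset wiring, and output path are placeholders, and the dataset id is taken from the previous card revision):

```python
# Minimal sketch of a SimPO-style run via TRL's CPOTrainer. Assumptions:
# loss_type="simpo" (inferred from the "simpo" tag) and a placeholder
# preference dataset with "prompt"/"chosen"/"rejected" columns.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import CPOConfig, CPOTrainer

model_id = "trl-lib/qwen1.5-0.5b-sft"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Hyperparameters mirror the card above.
config = CPOConfig(
    output_dir="qwen_cpo_entropy_0_3",
    loss_type="simpo",
    learning_rate=1e-6,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=4,
    num_train_epochs=3,
    seed=42,
)

# Placeholder dataset id, carried over from the previous card revision.
train_dataset = load_dataset("yakazimir/ultrafeedback_binarized", split="train")

trainer = CPOTrainer(
    model=model,
    args=config,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # named `tokenizer` in older TRL releases
)
trainer.train()
```

Consistent with a reference-free objective of this kind, the Rewards/* and Logps/* columns in the card coincide, which fits a loss whose implicit reward is the policy's own (length-normalized) sequence log-probability.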
all_results.json CHANGED
@@ -1,23 +1,9 @@
 {
     "epoch": 2.999297541394882,
-    "eval_logits/chosen": -0.47352659702301025,
-    "eval_logits/rejected": -0.42484986782073975,
-    "eval_logps/chosen": -1.535322666168213,
-    "eval_logps/rejected": -2.2016897201538086,
-    "eval_loss": 1.043636679649353,
-    "eval_rewards/accuracies": 0.6476261019706726,
-    "eval_rewards/chosen": -1.535322666168213,
-    "eval_rewards/margins": 0.6663669943809509,
-    "eval_rewards/rejected": -2.2016897201538086,
-    "eval_runtime": 42.6866,
-    "eval_samples": 1345,
-    "eval_samples_per_second": 31.509,
-    "eval_sft_loss": 1.4855674505233765,
-    "eval_steps_per_second": 7.895,
     "total_flos": 0.0,
-    "train_loss": 0.9803723535479859,
-    "train_runtime": 32947.1599,
+    "train_loss": 1.0402103091545567,
+    "train_runtime": 33797.3752,
     "train_samples": 59790,
-    "train_samples_per_second": 5.444,
-    "train_steps_per_second": 0.17
+    "train_samples_per_second": 5.307,
+    "train_steps_per_second": 0.166
 }
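The updated throughput field is internally consistent with the sample count and runtime. A quick check, assuming the Trainer reports train_samples_per_second as train_samples × num_epochs / train_runtime with num_epochs the configured value (3), not the fractional "epoch" field:

```python
# Back-of-envelope check of the updated throughput field (assumption stated
# in the lead-in above).
train_samples = 59790
num_epochs = 3               # "epoch": 2.9993... is three near-complete passes
train_runtime = 33797.3752   # seconds

print(round(train_samples * num_epochs / train_runtime, 3))  # 5.307, matching the diff
# train_steps_per_second = total_steps / train_runtime follows the same way;
# the total optimizer step count is not part of this diff.
```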
train_results.json CHANGED
@@ -1,9 +1,9 @@
 {
     "epoch": 2.999297541394882,
     "total_flos": 0.0,
-    "train_loss": 0.9803723535479859,
-    "train_runtime": 32947.1599,
+    "train_loss": 1.0402103091545567,
+    "train_runtime": 33797.3752,
     "train_samples": 59790,
-    "train_samples_per_second": 5.444,
-    "train_steps_per_second": 0.17
+    "train_samples_per_second": 5.307,
+    "train_steps_per_second": 0.166
 }
trainer_state.json CHANGED
The diff for this file is too large to render. See raw diff
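Once pushed, the saved checkpoint should load like any transformers causal LM. A minimal inference sketch, assuming the repo id yakazimir/qwen_cpo_entropy_0_3 (inferred from the model-index name and the committer's namespace, not stated in this diff):

```python
# Minimal inference sketch; the repo id is an assumption, not confirmed
# by this commit.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yakazimir/qwen_cpo_entropy_0_3"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```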