# zephyr-7b-uf-rlced-conifer-group-dpo-2e-alr-0.1
This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on the data/zephyr_uf_rlced_conifer_ref dataset. It achieves the following results on the evaluation set (a sketch after the metric list shows how the DPO quantities are computed):
- Loss: 0.2391
- Rewards/chosen: -3.1721
- Rewards/rejected: -8.7679
- Rewards/accuracies: 0.8788
- Rewards/margins: 5.5958
- Logps/rejected: -1280.5232
- Logps/chosen: -709.6791
- Logits/rejected: 2.9862
- Logits/chosen: 0.4871
- Excess Loss: 0.0302
- Alpha 0 Uf: 0.2677
- Alpha 1 Rlced Conifer: 0.7323
- Rewards/chosen 1 Rlced Conifer: -3.3519
- Rewards/rejected 1 Rlced Conifer: -10.1355
- Rewards/accuracies 1 Rlced Conifer: 0.9088
- Rewards/margins 1 Rlced Conifer: 6.7836
- Logps/rejected 1 Rlced Conifer: -1461.0847
- Logps/chosen 1 Rlced Conifer: -758.7692
- Logits/rejected 1 Rlced Conifer: 2.9834
- Logits/chosen 1 Rlced Conifer: 0.2872
- Task Loss 1 Rlced Conifer: 0.1744
- Task Excess Loss 1 Rlced Conifer: 0.0378
- Rewards/chosen 0 Uf: -2.5137
- Rewards/rejected 0 Uf: -3.9578
- Rewards/accuracies 0 Uf: 0.7751
- Rewards/margins 0 Uf: 1.4442
- Logps/rejected 0 Uf: -637.3895
- Logps/chosen 0 Uf: -540.6270
- Logits/rejected 0 Uf: 3.2024
- Logits/chosen 0 Uf: 1.0821
- Task Loss 0 Uf: 0.5033
- Task Excess Loss 0 Uf: 0.0690
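
The reward numbers above are the standard DPO implicit rewards, and the two Alpha values appear to be learned mixture weights over the two data groups, UltraFeedback (`0 Uf`) and RLCED-Conifer (`1 Rlced Conifer`); note they sum to 1. A minimal sketch of how these quantities are typically computed, assuming a standard β-scaled DPO setup (the function name and the β value here are illustrative, not taken from the training code):

```python
import torch
import torch.nn.functional as F

def dpo_metrics(policy_chosen_logps, policy_rejected_logps,
                ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Implicit DPO rewards from summed token log-probs (the Logps/* metrics)."""
    # Rewards/chosen and Rewards/rejected: beta-scaled log-ratios vs. the frozen reference.
    rewards_chosen = beta * (policy_chosen_logps - ref_chosen_logps)
    rewards_rejected = beta * (policy_rejected_logps - ref_rejected_logps)
    margins = rewards_chosen - rewards_rejected    # Rewards/margins
    accuracy = (margins > 0).float().mean()        # Rewards/accuracies
    loss = -F.logsigmoid(margins).mean()           # per-group DPO loss
    return loss, rewards_chosen, rewards_rejected, margins, accuracy

# Hypothetical reading of the "group-dpo" name: the total objective weights the
# per-group losses, total_loss = alpha_0 * loss_uf + alpha_1 * loss_rlced_conifer,
# with the alphas (reported above as Alpha 0 Uf / Alpha 1 Rlced Conifer) adapted
# during training, perhaps with the "alr" step size of 0.1 from the model name.
```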
## Model description
More information needed
## Intended uses & limitations
More information needed
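
A minimal inference sketch, assuming the checkpoint is published as NicholasCorrado/zephyr-7b-uf-rlced-conifer-group-dpo-2e-alr-0.1 and inherits the Zephyr chat template from the SFT base:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NicholasCorrado/zephyr-7b-uf-rlced-conifer-group-dpo-2e-alr-0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Format a single-turn conversation with the tokenizer's chat template.
messages = [{"role": "user", "content": "Explain DPO in one paragraph."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```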
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a hedged configuration sketch follows the list):
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
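
A minimal sketch of how these values map onto `transformers.TrainingArguments`; the effective train batch size of 256 is 8 per device × 8 GPUs × 4 accumulation steps, and the effective eval batch size of 64 is 8 × 8. This is illustrative, not the original launch configuration:

```python
from transformers import TrainingArguments

# 8 per-device train batch x 8 GPUs x 4 accumulation steps = 256 effective train batch;
# 8 per-device eval batch x 8 GPUs = 64 effective eval batch.
args = TrainingArguments(
    output_dir="zephyr-7b-uf-rlced-conifer-group-dpo-2e-alr-0.1",
    learning_rate=5e-7,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,
    num_train_epochs=2,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    bf16=True,  # assumption: mixed precision is typical for this recipe but not stated above
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer's default optimizer setting.
)
```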
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Excess Loss | Alpha 0 Uf | Alpha 1 Rlced Conifer | Rewards/chosen 1 Rlced Conifer | Rewards/rejected 1 Rlced Conifer | Rewards/accuracies 1 Rlced Conifer | Rewards/margins 1 Rlced Conifer | Logps/rejected 1 Rlced Conifer | Logps/chosen 1 Rlced Conifer | Logits/rejected 1 Rlced Conifer | Logits/chosen 1 Rlced Conifer | Task Loss 1 Rlced Conifer | Task Excess Loss 1 Rlced Conifer | Rewards/chosen 0 Uf | Rewards/rejected 0 Uf | Rewards/accuracies 0 Uf | Rewards/margins 0 Uf | Logps/rejected 0 Uf | Logps/chosen 0 Uf | Logits/rejected 0 Uf | Logits/chosen 0 Uf | Task Loss 0 Uf | Task Excess Loss 0 Uf |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.1882 | 0.4997 | 360 | 0.2996 | -1.6886 | -4.1417 | 0.8609 | 2.4532 | -817.9084 | -561.3260 | 1.4584 | 0.3084 | 0.0858 | 0.8164 | 0.1836 | -1.7447 | -4.6283 | 0.8926 | 2.8836 | -910.3677 | -598.0539 | 1.2471 | 0.1366 | 0.2441 | 0.1077 | -1.4688 | -2.4264 | 0.7375 | 0.9576 | -484.2446 | -436.1386 | 2.3554 | 0.8449 | 0.5159 | 0.0745 |
| 0.1534 | 0.9993 | 720 | 0.2788 | -1.6895 | -4.6113 | 0.8656 | 2.9218 | -864.8680 | -561.4199 | 1.5835 | 0.1282 | 0.0703 | 0.8639 | 0.1361 | -1.7298 | -5.1653 | 0.8921 | 3.4355 | -964.0696 | -596.5645 | 1.3282 | -0.0899 | 0.2304 | 0.0945 | -1.5189 | -2.6316 | 0.7670 | 1.1128 | -504.7690 | -441.1461 | 2.6475 | 0.8112 | 0.4886 | 0.0496 |
| 0.0947 | 1.4990 | 1080 | 0.2421 | -2.6372 | -7.6503 | 0.8797 | 5.0132 | -1168.7697 | -656.1883 | 2.9592 | 0.5518 | 0.0336 | 0.2372 | 0.7628 | -2.7432 | -8.7916 | 0.9108 | 6.0484 | -1326.6932 | -697.9009 | 2.9155 | 0.3378 | 0.1806 | 0.0448 | -2.2397 | -3.6057 | 0.7721 | 1.3660 | -602.1759 | -513.2244 | 3.3160 | 1.1969 | 0.4985 | 0.0623 |
| 0.0894 | 1.9986 | 1440 | 0.2391 | -3.1721 | -8.7679 | 0.8788 | 5.5958 | -1280.5232 | -709.6791 | 2.9862 | 0.4871 | 0.0302 | 0.2677 | 0.7323 | -3.3519 | -10.1355 | 0.9088 | 6.7836 | -1461.0847 | -758.7692 | 2.9834 | 0.2872 | 0.1744 | 0.0378 | -2.5137 | -3.9578 | 0.7751 | 1.4442 | -637.3895 | -540.6270 | 3.2024 | 1.0821 | 0.5033 | 0.0690 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.2.0a0+81ea7a4
- Datasets 2.21.0
- Tokenizers 0.19.1
Model lineage: mistralai/Mistral-7B-v0.1 → alignment-handbook/zephyr-7b-sft-full → NicholasCorrado/zephyr-7b-uf-rlced-conifer-group-dpo-2e-alr-0.1.