# Echo-IE-3B-v0.1
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.1664
- Rewards/chosen: -0.0370
- Rewards/rejected: -0.3280
- Rewards/accuracies: 1.0
- Rewards/margins: 0.2910
- Logps/rejected: -3.2803
- Logps/chosen: -0.3698
- Logits/rejected: 1.0091
- Logits/chosen: 0.9877
- Nll Loss: 0.1600
- Log Odds Ratio: -0.0425
- Log Odds Chosen: 4.2039
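
For reference, the reported reward margin is simply the gap between the chosen and rejected rewards (this is how preference trainers such as TRL report `rewards/margins`), which matches the numbers above:

$$\text{Rewards/margins} = \text{Rewards/chosen} - \text{Rewards/rejected} = -0.0370 - (-0.3280) = 0.2910$$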
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 10
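
The reward and log-odds metrics reported above are characteristic of the ORPO objective, although the card does not state the trainer. Below is a minimal sketch, assuming training with TRL's `ORPOTrainer` and mapping the hyperparameters from this card onto `ORPOConfig`; the dataset name, `beta`, and `output_dir` are hypothetical placeholders.

```python
# Sketch only: assumes TRL's ORPOTrainer was used (not stated in the card).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_id = "meta-llama/Llama-3.2-3B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Hyperparameters taken from this card; beta is an assumption (TRL's default is 0.1).
config = ORPOConfig(
    output_dir="Echo-IE-3B-v0.1",   # hypothetical
    learning_rate=6e-6,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=10,
    num_train_epochs=10,
    beta=0.1,                       # assumption, not from the card
)

# A preference dataset with "prompt", "chosen", and "rejected" columns (hypothetical name/splits).
dataset = load_dataset("your-org/your-preference-dataset")

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    tokenizer=tokenizer,  # newer TRL versions use processing_class=tokenizer
)
trainer.train()
```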
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Nll Loss | Log Odds Ratio | Log Odds Chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|:--------:|:--------------:|:---------------:|
| 0.3327 | 1.0 | 44 | 0.3234 | -0.0664 | -0.1431 | 1.0 | 0.0767 | -1.4309 | -0.6641 | 0.4718 | 0.5480 | 0.2905 | -0.2860 | 1.2590 |
| 0.2004 | 2.0 | 88 | 0.2283 | -0.0488 | -0.2296 | 1.0 | 0.1809 | -2.2965 | -0.4877 | 0.6711 | 0.7162 | 0.2142 | -0.1075 | 2.7194 |
| 0.1661 | 3.0 | 132 | 0.1974 | -0.0423 | -0.2767 | 1.0 | 0.2344 | -2.7672 | -0.4230 | 0.8238 | 0.8408 | 0.1878 | -0.0679 | 3.4301 |
| 0.1227 | 4.0 | 176 | 0.1813 | -0.0392 | -0.2999 | 1.0 | 0.2607 | -2.9992 | -0.3919 | 0.8916 | 0.8935 | 0.1734 | -0.0541 | 3.7906 |
| 0.1434 | 5.0 | 220 | 0.1743 | -0.0380 | -0.3141 | 1.0 | 0.2762 | -3.1414 | -0.3799 | 0.9271 | 0.9167 | 0.1671 | -0.0484 | 4.0032 |
| 0.0994 | 6.0 | 264 | 0.1697 | -0.0373 | -0.3202 | 1.0 | 0.2828 | -3.2017 | -0.3732 | 0.9822 | 0.9679 | 0.1629 | -0.0453 | 4.0966 |
| 0.0896 | 7.0 | 308 | 0.1677 | -0.0371 | -0.3247 | 1.0 | 0.2876 | -3.2469 | -0.3706 | 0.9892 | 0.9698 | 0.1612 | -0.0436 | 4.1599 |
| 0.1047 | 8.0 | 352 | 0.1666 | -0.0370 | -0.3268 | 1.0 | 0.2899 | -3.2685 | -0.3695 | 1.0025 | 0.9822 | 0.1602 | -0.0429 | 4.1914 |
| 0.0979 | 9.0 | 396 | 0.1662 | -0.0369 | -0.3281 | 1.0 | 0.2911 | -3.2808 | -0.3694 | 1.0120 | 0.9910 | 0.1598 | -0.0426 | 4.2063 |
| 0.0986 | 10.0 | 440 | 0.1664 | -0.0370 | -0.3280 | 1.0 | 0.2910 | -3.2803 | -0.3698 | 1.0091 | 0.9877 | 0.1600 | -0.0425 | 4.2039 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
## Inference Providers

This model is not currently available via any of the supported third-party Inference Providers, and it is not deployed on the HF Inference API.
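
Since no hosted endpoint is available, the model can be loaded locally with `transformers`. A minimal sketch; the prompt and generation settings are illustrative assumptions, and `device_map="auto"` requires `accelerate`:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TNE-AI/Echo-IE-3B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Llama-3.2-Instruct derivatives use a chat template; the prompt is a placeholder.
messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```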
## Model tree for TNE-AI/Echo-IE-3B-v0.1

Base model: [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct)