---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2-0.5B-Instruct
tags:
- trl
- reward-trainer
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Qwen2-0.5B-Reward
  results: []
---

# Qwen2-0.5B-Reward

This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5139
- Accuracy: 0.723

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.6426        | 0.0516 | 50   | 0.6197          | 0.664    |
| 0.5877        | 0.1032 | 100  | 0.6080          | 0.662    |
| 0.5902        | 0.1548 | 150  | 0.5787          | 0.697    |
| 0.5582        | 0.2064 | 200  | 0.5555          | 0.694    |
| 0.5664        | 0.2580 | 250  | 0.5441          | 0.699    |
| 0.5638        | 0.3096 | 300  | 0.5290          | 0.716    |
| 0.5375        | 0.3612 | 350  | 0.5315          | 0.729    |
| 0.5233        | 0.4128 | 400  | 0.5380          | 0.718    |
| 0.5375        | 0.4644 | 450  | 0.5482          | 0.710    |
| 0.5223        | 0.5160 | 500  | 0.5352          | 0.720    |
| 0.5229        | 0.5676 | 550  | 0.5251          | 0.724    |
| 0.5173        | 0.6192 | 600  | 0.5181          | 0.717    |
| 0.5227        | 0.6708 | 650  | 0.5178          | 0.724    |
| 0.5103        | 0.7224 | 700  | 0.5153          | 0.728    |
| 0.5178        | 0.7740 | 750  | 0.5198          | 0.725    |
| 0.5072        | 0.8256 | 800  | 0.5195          | 0.722    |
| 0.5137        | 0.8772 | 850  | 0.5168          | 0.725    |
| 0.4995        | 0.9288 | 900  | 0.5146          | 0.724    |
| 0.4988        | 0.9804 | 950  | 0.5135          | 0.723    |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
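
## How to use

A minimal usage sketch for scoring a completion with this reward model. It assumes the checkpoint is published under the repo id `Qwen2-0.5B-Reward` (replace with the actual repo id) and that, as is the TRL RewardTrainer default, the reward head is a single-logit sequence-classification model.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "Qwen2-0.5B-Reward"  # placeholder; replace with the actual repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(
    model_id, num_labels=1, torch_dtype=torch.bfloat16
)
model.eval()

# Score a chat-formatted prompt/response pair: a higher logit means the
# reward model prefers the response.
messages = [
    {"role": "user", "content": "Explain gravity to a five-year-old."},
    {"role": "assistant", "content": "Gravity is an invisible pull that keeps you on the ground."},
]
text = tokenizer.apply_chat_template(messages, tokenize=False)
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    reward = model(**inputs).logits[0, 0].item()
print(f"reward score: {reward:.4f}")
```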
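
## Reproducing the training setup

A rough training sketch with TRL's `RewardTrainer`, mirroring the hyperparameters listed above. The training data is unknown, so `"your_preference_dataset"` is a placeholder; it must provide `chosen`/`rejected` preference pairs in the format `RewardTrainer` expects, and the exact `RewardConfig` options used for this checkpoint are not recorded in this card.

```python
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from trl import RewardConfig, RewardTrainer

model_id = "Qwen/Qwen2-0.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=1)
model.config.pad_token_id = tokenizer.pad_token_id

dataset = load_dataset("your_preference_dataset")  # placeholder dataset id

training_args = RewardConfig(
    output_dir="Qwen2-0.5B-Reward",
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=1.0,
    lr_scheduler_type="linear",
    eval_strategy="steps",
    eval_steps=50,
    seed=42,
)

trainer = RewardTrainer(
    model=model,
    args=training_args,
    tokenizer=tokenizer,  # newer TRL releases use processing_class= instead
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
)
trainer.train()
```

The per-device batch size of 8 matches the listed totals of 64 when run on 8 GPUs (e.g. via `accelerate launch`).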