End of training

Files changed:
- README.md: +5 -5
- adapter_model.bin: +1 -1
README.md CHANGED

@@ -71,7 +71,7 @@ pad_to_sequence_len: true
 resume_from_checkpoint: null
 sample_packing: true
 saves_per_epoch: 1
-seed:
+seed: 88046
 sequence_len: 4096
 special_tokens: null
 strict: false
@@ -95,12 +95,12 @@ xformers_attention: null
 
 </details><br>
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/fatcat87-taopanda/subnet56/runs/
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/fatcat87-taopanda/subnet56/runs/ci0wikh3)
 # taopanda-1_modal-workspace-taopanda
 
 This model is a fine-tuned version of [unsloth/Qwen2-0.5B](https://huggingface.co/unsloth/Qwen2-0.5B) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.
+- Loss: 1.9134
 
 ## Model description
 
@@ -122,7 +122,7 @@ The following hyperparameters were used during training:
 - learning_rate: 0.0002
 - train_batch_size: 2
 - eval_batch_size: 2
-- seed:
+- seed: 88046
 - distributed_type: multi-GPU
 - num_devices: 4
 - gradient_accumulation_steps: 4
@@ -136,7 +136,7 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 1.
+| 1.4515 | 1.0 | 1 | 1.9134 |
 
 
 ### Framework versions
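For context, not part of the commit: the card documents a PEFT adapter trained on top of unsloth/Qwen2-0.5B (the ~70 MB adapter_model.bin below holds the adapter weights, not a full checkpoint), and the diffed hyperparameters imply an effective batch size of 2 per device × 4 devices × 4 gradient-accumulation steps = 32 sequences per optimizer step. A minimal loading sketch using the standard transformers/peft APIs might look like this; the adapter repo id is a hypothetical placeholder inferred from the card title.

```python
# Minimal sketch (assumption: this repo hosts a PEFT/LoRA adapter).
# "taopanda/taopanda-1_modal-workspace-taopanda" is a hypothetical repo id
# inferred from the card title; substitute the actual Hub id.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2-0.5B")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2-0.5B")
model = PeftModel.from_pretrained(base, "taopanda/taopanda-1_modal-workspace-taopanda")
model.eval()  # adapter weights are applied on top of the frozen base model
```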
adapter_model.bin CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:20a662c9fce155a0469451a886e1020e1593a8ef245551007c47642e701fb72f
 size 70506570
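This is a Git LFS pointer update: only the sha256 oid changes, while the payload size stays at 70506570 bytes. Since an LFS oid is simply the SHA-256 digest of the file contents, a downloaded adapter_model.bin can be checked against the new pointer with a standard-library sketch like the one below.

```python
import hashlib

def lfs_oid(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest that Git LFS records as the pointer's oid."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# For a correct download this should print the oid from the new pointer:
# 20a662c9fce155a0469451a886e1020e1593a8ef245551007c47642e701fb72f
print(lfs_oid("adapter_model.bin"))
```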