Commit d95d1c6 (verified) by RichardErkhov · 1 Parent(s): 4167ba5

uploaded readme

Files changed (1): README.md (+175 lines)
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

ultrachat-evolcode-phi-2-sft-chatml - bnb 8bits
- Model creator: https://huggingface.co/AlekseyKorshuk/
- Original model: https://huggingface.co/AlekseyKorshuk/ultrachat-evolcode-phi-2-sft-chatml/
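The card does not include loading instructions, so here is a minimal sketch of loading the checkpoint in 8-bit with bitsandbytes through `transformers`. The model id below points at the original checkpoint (the id of this quantized repo is not stated in the card and should be substituted if preferred):

```python
# Minimal sketch (not from the original card): load the model in 8-bit with
# bitsandbytes via transformers. The id below is the original checkpoint;
# substitute this quantized repo's id as appropriate.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "AlekseyKorshuk/ultrachat-evolcode-phi-2-sft-chatml"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
    trust_remote_code=True,
)
```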
Original model description:
---
license: mit
base_model: AlekseyKorshuk/ultrachat-phi-2-sft-chatml
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ultrachat-evolcode-phi-2-sft-chatml
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.0`
```yaml
base_model: AlekseyKorshuk/ultrachat-phi-2-sft-chatml
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
trust_remote_code: true

hub_model_id: AlekseyKorshuk/ultrachat-evolcode-phi-2-sft-chatml
hub_strategy: every_save

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: AlekseyKorshuk/evol-codealpaca-v1-sft
    type: sharegpt
    conversation: chatml

dataset_prepared_path:
val_set_size: 0
output_dir: ./output

sequence_len: 2048
sample_packing: false
pad_to_sequence_len:

lora_r:
lora_alpha:
lora_dropout:
lora_target_modules:
lora_target_linear:
lora_fan_in_fan_out:

wandb_project: ui-thesis
wandb_entity:
wandb_watch:
wandb_name: ultrachat-evolcode-phi-2-sft-chatml
wandb_log_model:

gradient_accumulation_steps: 2
micro_batch_size: 16
num_epochs: 1
optimizer: paged_adamw_8bit
adam_beta1: 0.9
adam_beta2: 0.95
max_grad_norm: 1.0
adam_epsilon: 0.00001
lr_scheduler: cosine
cosine_min_lr_ratio: 0.1
learning_rate: 2e-5
warmup_ratio: 0.1
weight_decay: 0.1

train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: true

#bf16: false
#fp16: false
#tf32: false
#float16: true

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true


evals_per_epoch: 0
eval_table_size: 8 # Approximate number of predictions sent to wandb, depending on batch size. Enabled above 0. Default is 0.
eval_table_max_new_tokens: 768 # Total number of tokens generated for predictions sent to wandb. Default is 128.
eval_sample_packing: false

chat_template: chatml
saves_per_epoch: 5
save_total_limit: 1
seed: 42
debug:
deepspeed:

fsdp:
fsdp_config:
resize_token_embeddings_to_32x: true

```

</details><br>
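The card does not record how training was launched. Assuming axolotl `0.4.0` is installed as documented upstream, runs with a config like the one above are typically started by saving it to a file (e.g. `config.yaml`) and running `accelerate launch -m axolotl.cli.train config.yaml`; the exact invocation used for this run is not stated.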

# ultrachat-evolcode-phi-2-sft-chatml

This model is a fine-tuned version of [AlekseyKorshuk/ultrachat-phi-2-sft-chatml](https://huggingface.co/AlekseyKorshuk/ultrachat-phi-2-sft-chatml) on the AlekseyKorshuk/evol-codealpaca-v1-sft dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed
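The card leaves usage unspecified, but the axolotl config above sets `chat_template: chatml`, so the model expects ChatML-formatted conversations. A minimal prompting sketch, assuming the repo's tokenizer ships a ChatML chat template:

```python
# Hedged sketch: prompt the model with the ChatML template it was trained on.
# Assumes the tokenizer carries a chat_template (the config sets chat_template: chatml).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AlekseyKorshuk/ultrachat-evolcode-phi-2-sft-chatml"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", trust_remote_code=True)

messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```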

## Training and evaluation data

Per the axolotl config above, the model was trained on [AlekseyKorshuk/evol-codealpaca-v1-sft](https://huggingface.co/datasets/AlekseyKorshuk/evol-codealpaca-v1-sft) in ShareGPT format with ChatML conversations; no validation split was held out (`val_set_size: 0`).

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 7
- num_epochs: 1
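For cross-checking, the total train batch size is the product of the per-device batch size, the gradient accumulation steps, and the device count: 16 × 2 × 4 = 128 (and 16 × 4 = 64 for evaluation, which does not accumulate gradients).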

### Training results



### Framework versions

- Transformers 4.37.0
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0