## Setup Notes

For this model, a VM with 2 T4 GPUs was used.

To get training to run on both GPUs simultaneously, the following command was used to launch training:

```bash
WORLD_SIZE=2 CUDA_VISIBLE_DEVICES=0,1 torchrun --nproc_per_node=2 --master_port=1234 finetune.py --base_model 'decapoda-research/llama-7b-hf' --data_path 'wikisql' --output_dir './lora-alpaca' --num_epochs 1 --micro_batch_size 32
```

Note 1. The micro batch size was increased from the default of 4 to 32 (see the batch-size arithmetic below).

Note 2. The output directory was initially `lora-alpaca`; its contents were moved to a new folder when the git repository was initialized.
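
To see how Note 1 interacts with the defaults: with the global batch size of 128 (echoed in the params in the log below), a micro batch size of 32 across 2 GPUs implies 2 gradient-accumulation steps per GPU. A minimal sketch of that arithmetic, assuming finetune.py follows the standard alpaca-lora logic of dividing batch_size by micro_batch_size and then by the DDP world size:

```python
# Sketch only -- assumed to mirror alpaca-lora's finetune.py, not copied from this repo.
batch_size = 128        # global batch size (default; echoed in the log below)
micro_batch_size = 32   # per-forward-pass batch, raised from the default 4
world_size = 2          # two T4 GPUs -> two DDP processes

gradient_accumulation_steps = batch_size // micro_batch_size   # 4
gradient_accumulation_steps //= world_size                     # 2 per GPU

# Effective global batch: 32 per pass x 2 GPUs x 2 accumulation steps = 128.
assert micro_batch_size * world_size * gradient_accumulation_steps == batch_size
```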

## Log

```
(sqltest) chrisdono4@deep-learning-duo-t4-4:~/alpaca-lora$ WORLD_SIZE=2 CUDA_VISIBLE_DEVICES=0,1 torchrun --nproc_per_node=2 --master_port=1234 finetune.py --base_model 'decapoda-research/llama-7b-hf' --data_path 'wikisql' --output_dir './lora-alpaca' --micro_batch_size 32 --num_epochs 1
WARNING:torch.distributed.run:
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
/opt/conda/envs/sqltest/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: /opt/conda/envs/sqltest did not contain libcudart.so as expected! Searching further paths...
  warn(msg)
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 7.5
CUDA SETUP: Detected CUDA version 113
CUDA SETUP: Loading binary /opt/conda/envs/sqltest/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda113.so...
/opt/conda/envs/sqltest/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: /opt/conda/envs/sqltest did not contain libcudart.so as expected! Searching further paths...
  warn(msg)
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 7.5
CUDA SETUP: Detected CUDA version 113
CUDA SETUP: Loading binary /opt/conda/envs/sqltest/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda113.so...
```
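
The duplicated bitsandbytes banner above is expected: each of the two DDP ranks prints it while loading the 8-bit base model. A minimal sketch of the kind of 8-bit load that triggers it, assumed to mirror upstream alpaca-lora rather than copied from this repo:

```python
import os

from transformers import LlamaForCausalLM

# Assumption: mirrors upstream alpaca-lora -- load the base model in 8-bit,
# pinning each DDP rank's copy to its own GPU via device_map.
model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",
    load_in_8bit=True,  # requires bitsandbytes (hence the banner above)
    device_map={"": int(os.environ.get("LOCAL_RANK", 0))},
)
```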

```
Training Alpaca-LoRA model with params:
base_model: decapoda-research/llama-7b-hf
data_path: wikisql
output_dir: ./lora-alpaca
batch_size: 128
micro_batch_size: 32
num_epochs: 1
learning_rate: 0.0003
cutoff_len: 256
val_set_size: 2000
lora_r: 8
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules: ['q_proj', 'v_proj']
train_on_inputs: True
add_eos_token: False
group_by_length: False
wandb_project:
wandb_run_name:
wandb_watch:
wandb_log_model:
resume_from_checkpoint: False
prompt template: alpaca
```
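
The LoRA hyperparameters echoed above map one-to-one onto a PEFT `LoraConfig`. A minimal sketch of the equivalent configuration, as an illustration of the peft API rather than an excerpt from finetune.py:

```python
from peft import LoraConfig, TaskType

# LoRA configuration matching the hyperparameters in the log above.
lora_config = LoraConfig(
    r=8,                                  # lora_r: rank of the update matrices
    lora_alpha=16,                        # scaling: alpha / r = 2.0
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention query/value projections
    bias="none",
    task_type=TaskType.CAUSAL_LM,
)
```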

```
Loading checkpoint shards: 100%|████████████████████████████████| 33/33 [01:24<00:00,  2.57s/it]
Loading checkpoint shards: 100%|████████████████████████████████| 33/33 [01:25<00:00,  2.58s/it]
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'LLaMATokenizer'.
The class this function is called from is 'LlamaTokenizer'.
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'LLaMATokenizer'.
The class this function is called from is 'LlamaTokenizer'.
```
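
The tokenizer warnings above are a known quirk of the decapoda-research checkpoint: its tokenizer_config.json records the pre-release class name `LLaMATokenizer`, while current transformers spells it `LlamaTokenizer`. Loading the concrete class directly sidesteps the mismatch; a minimal sketch, assuming a transformers version that ships `LlamaTokenizer`:

```python
from transformers import LlamaTokenizer

# Load via the concrete class rather than AutoTokenizer so the stale
# "LLaMATokenizer" name in tokenizer_config.json is not consulted.
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
```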

```
Found cached dataset wikisql (/home/chrisdono4/.cache/huggingface/datasets/wikisql/default/0.1.0/7037bfe6a42b1ca2b6ac3ccacba5253b1825d31379e9cc626fc79a620977252d)
  0%|                                                            | 0/3 [00:00<?, ?it/s]
Found cached dataset wikisql (/home/chrisdono4/.cache/huggingface/datasets/wikisql/default/0.1.0/7037bfe6a42b1ca2b6ac3ccacba5253b1825d31379e9cc626fc79a620977252d)
100%|███████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 39.74it/s]
100%|███████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 26.05it/s]
trainable params: 4194304 || all params: 6742609920 || trainable%: 0.06220594176090199
trainable params: 4194304 || all params: 6742609920 || trainable%: 0.06220594176090199
```
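
The trainable-parameter count is exactly what the LoRA settings predict. LLaMA-7B has 32 decoder layers with hidden size 4096, and each adapted module (q_proj, v_proj) gains a pair of rank-8 matrices, A (r x 4096) and B (4096 x r). Checking the arithmetic:

```python
hidden_size = 4096      # LLaMA-7B hidden dimension
num_layers = 32         # LLaMA-7B decoder layers
r = 8                   # lora_r
modules_per_layer = 2   # q_proj and v_proj

# Each adapter adds A (r x hidden) plus B (hidden x r) parameters.
lora_params = num_layers * modules_per_layer * 2 * r * hidden_size
print(lora_params)                     # 4194304, matching the log
print(100 * lora_params / 6742609920)  # ~0.0622% trainable, matching the log
```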

```
Loading cached split indices for dataset at /home/chrisdono4/.cache/huggingface/datasets/wikisql/default/0.1.0/7037bfe6a42b1ca2b6ac3ccacba5253b1825d31379e9cc626fc79a620977252d/cache-bccdadf4048a2d5b.arrow and /home/chrisdono4/.cache/huggingface/datasets/wikisql/default/0.1.0/7037bfe6a42b1ca2b6ac3ccacba5253b1825d31379e9cc626fc79a620977252d/cache-f8d5ea283d842b5a.arrow
Loading cached split indices for dataset at /home/chrisdono4/.cache/huggingface/datasets/wikisql/default/0.1.0/7037bfe6a42b1ca2b6ac3ccacba5253b1825d31379e9cc626fc79a620977252d/cache-bccdadf4048a2d5b.arrow and /home/chrisdono4/.cache/huggingface/datasets/wikisql/default/0.1.0/7037bfe6a42b1ca2b6ac3ccacba5253b1825d31379e9cc626fc79a620977252d/cache-f8d5ea283d842b5a.arrow
{'loss': 2.0163, 'learning_rate': 2.9999999999999997e-05, 'epoch': 0.02}
{'loss': 1.9284, 'learning_rate': 5.9999999999999995e-05, 'epoch': 0.05}
{'loss': 1.77, 'learning_rate': 8.999999999999999e-05, 'epoch': 0.07}
{'loss': 1.3452, 'learning_rate': 0.00011999999999999999, 'epoch': 0.09}
{'loss': 0.9243, 'learning_rate': 0.00015, 'epoch': 0.12}
{'loss': 0.8385, 'learning_rate': 0.00017999999999999998, 'epoch': 0.14}
{'loss': 0.7986, 'learning_rate': 0.00020999999999999998, 'epoch': 0.16}
{'loss': 0.7786, 'learning_rate': 0.00023999999999999998, 'epoch': 0.19}
{'loss': 0.75, 'learning_rate': 0.00027, 'epoch': 0.21}
{'loss': 0.7389, 'learning_rate': 0.0003, 'epoch': 0.24}
{'loss': 0.7248, 'learning_rate': 0.00029076923076923073, 'epoch': 0.26}
{'loss': 0.7199, 'learning_rate': 0.0002815384615384615, 'epoch': 0.28}
{'loss': 0.7159, 'learning_rate': 0.0002723076923076923, 'epoch': 0.31}
{'loss': 0.7029, 'learning_rate': 0.00026307692307692306, 'epoch': 0.33}
{'loss': 0.6851, 'learning_rate': 0.0002538461538461538, 'epoch': 0.35}
{'loss': 0.6935, 'learning_rate': 0.0002446153846153846, 'epoch': 0.38}
{'loss': 0.6737, 'learning_rate': 0.00023538461538461536, 'epoch': 0.4}
{'loss': 0.682, 'learning_rate': 0.00022615384615384614, 'epoch': 0.42}
{'loss': 0.667, 'learning_rate': 0.0002169230769230769, 'epoch': 0.45}
{'loss': 0.6731, 'learning_rate': 0.00020769230769230766, 'epoch': 0.47}
{'eval_loss': 0.6641973853111267, 'eval_runtime': 178.902, 'eval_samples_per_second': 11.179, 'eval_steps_per_second': 0.699, 'epoch': 0.47}
{'loss': 0.6631, 'learning_rate': 0.00019846153846153844, 'epoch': 0.49}
{'loss': 0.6652, 'learning_rate': 0.0001892307692307692, 'epoch': 0.52}
{'loss': 0.6591, 'learning_rate': 0.00017999999999999998, 'epoch': 0.54}
{'loss': 0.6605, 'learning_rate': 0.00017076923076923074, 'epoch': 0.56}
{'loss': 0.653, 'learning_rate': 0.00016153846153846153, 'epoch': 0.59}
{'loss': 0.6574, 'learning_rate': 0.00015230769230769228, 'epoch': 0.61}
{'loss': 0.6545, 'learning_rate': 0.00014307692307692307, 'epoch': 0.64}
{'loss': 0.6328, 'learning_rate': 0.00013384615384615385, 'epoch': 0.66}
{'loss': 0.6485, 'learning_rate': 0.0001246153846153846, 'epoch': 0.68}
{'loss': 0.6477, 'learning_rate': 0.00011538461538461538, 'epoch': 0.71}
{'loss': 0.639, 'learning_rate': 0.00010615384615384615, 'epoch': 0.73}
{'loss': 0.6384, 'learning_rate': 9.692307692307692e-05, 'epoch': 0.75}
{'loss': 0.6338, 'learning_rate': 8.76923076923077e-05, 'epoch': 0.78}
{'loss': 0.6394, 'learning_rate': 7.846153846153845e-05, 'epoch': 0.8}
 82%|███████████████████████████████████████████████▌          | 348/425 [3:57:23<51:48, 40.37s/it]
{'loss': 0.6345, 'learning_rate': 6.923076923076922e-05, 'epoch': 0.82}
{'loss': 0.6424, 'learning_rate': 5.9999999999999995e-05, 'epoch': 0.85}
{'loss': 0.6271, 'learning_rate': 5.0769230769230766e-05, 'epoch': 0.87}
{'loss': 0.6267, 'learning_rate': 4.153846153846154e-05, 'epoch': 0.89}
{'loss': 0.642, 'learning_rate': 3.230769230769231e-05, 'epoch': 0.92}
{'loss': 0.6389, 'learning_rate': 2.3076923076923076e-05, 'epoch': 0.94}
{'eval_loss': 0.6302221417427063, 'eval_runtime': 177.453, 'eval_samples_per_second': 11.271, 'eval_steps_per_second': 0.704, 'epoch': 0.94}
{'loss': 0.6224, 'learning_rate': 1.3846153846153845e-05, 'epoch': 0.96}
{'loss': 0.6361, 'learning_rate': 4.615384615384615e-06, 'epoch': 0.99}
100%|███████████████████████████████████████████████████████████| 425/425 [4:52:00<00:00, 36.53s/it]
{'train_runtime': 17520.706, 'train_samples_per_second': 3.102, 'train_steps_per_second': 0.024, 'train_loss': 0.7834248065948486, 'epoch': 1.0}
100%|███████████████████████████████████████████████████████████| 425/425 [4:52:00<00:00, 41.22s/it]
```
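
One readable detail in the trace above: the learning rate climbs linearly to the 3e-4 peak over the first 100 optimizer steps, then decays linearly to zero at step 425. That is consistent with the linear warmup-plus-decay schedule alpaca-lora configures through transformers. A sketch of the implied schedule, where `lr_at` is a hypothetical helper for illustration only:

```python
def lr_at(step: int, peak: float = 3e-4, warmup: int = 100, total: int = 425) -> float:
    """Linear warmup to `peak`, then linear decay to zero, as the logged rates imply."""
    if step < warmup:
        return peak * step / warmup
    return peak * (total - step) / (total - warmup)

print(lr_at(100))  # 0.0003          -- the logged peak at epoch 0.24
print(lr_at(110))  # ~0.00029076923  -- matches the log entry at epoch 0.26
```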