Trained using https://github.com/tloen/alpaca-lora, with the following lines removed from `finetune.py` because they were causing problems:

```
    # These lines monkey-patch state_dict so that saving captures only
    # the LoRA adapter weights rather than the full base model.
    old_state_dict = model.state_dict
    model.state_dict = (
        lambda self, *_, **__: get_peft_model_state_dict(
            self, old_state_dict()
        )
    ).__get__(model, type(model))
```
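
With recent versions of `peft`, that override is unnecessary: `PeftModel.save_pretrained` already writes only the adapter weights. A minimal sketch of saving without the patch, assuming `model` is the `PeftModel` built in `finetune.py`:

```python
# Minimal sketch, assuming `model` is a peft.PeftModel wrapping the base
# LLaMA model. save_pretrained writes adapter_config.json plus the LoRA
# adapter weights only, so the state_dict override above is not needed.
model.save_pretrained("./lora-alpaca")
```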

Training parameters:

```
base_model: yahma/llama-7b-hf
data_path: prognosis/medical_qa_alpaca
output_dir: ./lora-alpaca
batch_size: 128
micro_batch_size: 8
num_epochs: 5
learning_rate: 0.0003
cutoff_len: 512
val_set_size: 0.1
lora_r: 16
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules: ['q_proj', 'k_proj', 'v_proj', 'o_proj']
train_on_inputs: True
add_eos_token: False
group_by_length: True
wandb_project: medical_alpaca_hf
wandb_run_name: run_3
wandb_watch: 
wandb_log_model: 
resume_from_checkpoint: False
prompt template: alpaca
```
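
These settings correspond to a `peft` `LoraConfig` roughly as sketched below; the effective gradient accumulation is batch_size / micro_batch_size = 128 / 8 = 16 steps.

```python
from peft import LoraConfig

# Sketch of the LoRA configuration implied by the parameters above;
# field names follow peft's LoraConfig rather than the training script.
config = LoraConfig(
    r=16,                  # lora_r
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    bias="none",           # assumption: alpaca-lora's default setting
    task_type="CAUSAL_LM",
)
```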


### Commands used


Finetuning

```
python finetune.py \
    --base_model 'yahma/llama-7b-hf' \
    --data_path 'prognosis/medical_qa_alpaca' \
    --output_dir './lora-alpaca' \
    --wandb_project 'medical_alpaca_hf' \
    --wandb_run_name 'run_3' \
    --lora_target_modules '[q_proj,k_proj,v_proj,o_proj]' \
    --num_epochs 5 \
    --cutoff_len 512 \
    --group_by_length \
    --val_set_size 0.1 \
    --lora_r=16 \
    --micro_batch_size=8
```
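
The flags passed here appear to be only the values that differ from `finetune.py`'s defaults; the remaining parameters listed above (batch_size, learning_rate, lora_alpha, lora_dropout, train_on_inputs) come from the script's defaults.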

Generating

```
python generate.py \
    --load_8bit \
    --base_model 'yahma/llama-7b-hf' \
    --lora_weights 'alpaca-lora/lora-alpaca' \
    --share_gradio
```
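
The adapter can also be loaded programmatically. A minimal sketch roughly mirroring what `generate.py` does, assuming `bitsandbytes` is installed for 8-bit loading:

```python
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

# Minimal sketch: load the 8-bit base model, then attach the LoRA adapter.
tokenizer = LlamaTokenizer.from_pretrained("yahma/llama-7b-hf")
model = LlamaForCausalLM.from_pretrained(
    "yahma/llama-7b-hf",
    load_in_8bit=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(
    model,
    "alpaca-lora/lora-alpaca",
    torch_dtype=torch.float16,
)
model.eval()
```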

Installing Git LFS (needed to pull the large weight files)

```
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
sudo apt-get install git-lfs
```
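
After installing, enable it once per user account:

```
git lfs install
```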