Update README.md
README.md (CHANGED)
@@ -93,14 +93,9 @@ Please click [here](https://forms.gle/jk851eBVbX1m5TAv5) to apply for the offici
 |-- tokenizer_checklist.chk
 ```
 
-You can use the following command to download the `
+You can use the following command to download the `KnowLM-13B-Diff` file (assuming it is saved in the `./knowlm-diff` folder):
 ```shell
-python tools/download.py --download_path ./
-```
-
-If you want to download the diff weights in the fp16 format, please use the following command (assuming it is saved in the `./zhixi-diff-fp16` folder):
-```shell
-python tools/download.py --download_path ./zhixi-diff-fp16 --only_base --fp16
+python tools/download.py --specify --repo_name openkg/knowlm-13b-diff --download_path ./knowlm-diff
 ```
 
 > :exclamation:Noted. If the download is interrupted, please repeat the command mentioned above. HuggingFace provides the functionality of resumable downloads, allowing you to resume the download from where it was interrupted.
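The note above says an interrupted download can simply be re-run, since HuggingFace resumes from where it stopped. A tiny wrapper can automate that; this is a hypothetical sketch (the `retry` helper is not part of the repository's tooling):

```shell
# Hypothetical helper: re-run a command until it exits 0, up to 5 attempts.
# Resumable downloads mean each retry continues where the last one stopped.
retry() {
  local attempt=0
  until "$@"; do
    attempt=$((attempt + 1))
    [ "$attempt" -ge 5 ] && return 1
    echo "attempt $attempt failed, retrying..." >&2
    sleep 1
  done
}

# usage, with the download command from this section:
# retry python tools/download.py --specify --repo_name openkg/knowlm-13b-diff --download_path ./knowlm-diff
```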
@@ -115,20 +110,13 @@ python convert_llama_weights_to_hf.py --input_dir ./ --model_size 13B --output_d
 
 **3. Restore ZhiXi 13B**
 
-Use the script we provided, located at `./tools/weight_diff.py`, execute the following command, and you will get the complete `
-
-```shell
-python tools/weight_diff.py recover --path_raw ./converted --path_diff ./zhixi-diff --path_tuned ./zhixi
-```
-
-The final complete ZhiXi weights are saved in the `./zhixi` folder.
+Use the script we provided, located at `./tools/weight_diff.py`, execute the following command, and you will get the complete `KnowLM` weight:
 
-If you have downloaded the diff weights version in fp16 format, you can obtain them using the following command. Please note that there might be slight differences compared to the weights obtained in fp32 format:
 ```shell
-python tools/weight_diff.py recover --path_raw ./converted --path_diff ./
+python tools/weight_diff.py recover --path_raw ./converted --path_diff ./knowlm-diff --path_tuned ./knowlm
 ```
 
-
+The final complete KnowLM weights are saved in the `./knowlm` folder.
 
 
 <h3 id="1-3">1.3 Instruction tuning LoRA weight acquisition</h3>
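After the recover step it is worth confirming that the output folder (`./zhixi`, or `./knowlm` after the rename) actually holds a loadable HuggingFace checkpoint before moving on. A minimal sketch under the assumption that the usual HuggingFace file names are present; `check_weights` is a hypothetical helper, not part of the repo:

```shell
# Hypothetical sanity check: verify that a recovered weights directory
# contains the files a HuggingFace LLaMA checkpoint is normally loaded from.
check_weights() {
  local dir="$1" missing=0 f
  for f in config.json tokenizer.model; do
    if [ ! -e "$dir/$f" ]; then
      echo "missing: $dir/$f" >&2
      missing=1
    fi
  done
  return "$missing"
}

# usage: check_weights ./knowlm
```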
@@ -136,10 +124,10 @@ python tools/weight_diff.py recover --path_raw ./converted --path_diff ./zhixi-d
 Use the script file we provided, located at `./tools/download.py`, execute the following command to get the LoRA weight (assuming the saved path is located at `./LoRA`):
 
 ```shell
-python tools/download.py --download_path ./
+python tools/download.py --download_path ./lora --specify --repo_name openkg/knowlm-13b-lora
 ```
 
-The final complete weights are saved in the `./
+The final complete weights are saved in the `./lora` folder.
 
 
 
@@ -152,7 +140,7 @@ The final complete weights are saved in the `./LoRA` folder.
 1. If you want to reproduce the results in section `1.1`(**pretraining cases**), please run the following command (assuming that the complete pre-training weights of `ZhiXi` have been obtained according to the steps in section `2.2`, and the ZhiXi weight is saved in the `./zhixi` folder):
 
 ```shell
-python examples/generate_finetune.py --base_model ./
+python examples/generate_finetune.py --base_model ./knowlm
 ```
 
 The result in section `1.1` can be obtained.
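The generation commands in this section load a 13B model, so a quick preflight for a visible GPU can save a failed launch. A hedged sketch; nothing here is specific to this repository:

```shell
# Hypothetical preflight: report visible NVIDIA GPUs, or warn that a 13B
# model will fall back to much slower CPU execution.
gpu_preflight() {
  if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=name,memory.total --format=csv,noheader
  else
    echo "no NVIDIA GPU detected; expect very slow CPU generation" >&2
  fi
}

gpu_preflight
```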
@@ -160,7 +148,7 @@ The final complete weights are saved in the `./LoRA` folder.
 2. If you want to reproduce the results in section `1.2`(**information extraction cases**), please run the following command (assuming that the LoRA weights of `ZhiXi` have been obtained according to the steps in section `2.3`, and the LoRA weights is saved in the `./lora` folder):
 
 ```shell
-python examples/generate_lora.py --load_8bit --base_model ./
+python examples/generate_lora.py --load_8bit --base_model ./knowlm --lora_weights ./lora --run_ie_cases
 ```
 
 The result in section `1.2` can be obtained.
@@ -168,7 +156,7 @@ The final complete weights are saved in the `./LoRA` folder.
 3. If you want to reproduce the results in section `1.3`(**general ablities cases**), please run the following command (assuming that the LoRA weights of `ZhiXi` have been obtained according to the steps in section `2.3`, and the LoRA weights is saved in the `./lora` folder):
 
 ```shell
-python examples/generate_lora.py --load_8bit --base_model ./
+python examples/generate_lora.py --load_8bit --base_model ./knowlm --lora_weights ./lora --run_general_cases
 ```
 
 The result in section `1.3` can be obtained.
@@ -182,7 +170,7 @@ We offer two methods: the first one is **command-line interaction**, and the sec
 1. Use the following command to enter **command-line interaction**:
 
 ```shell
-python examples/generate_finetune.py --base_model ./
+python examples/generate_finetune.py --base_model ./knowlm --interactive
 ```
 
 The disadvantage is the inability to dynamically change decoding parameters.
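The web-based interaction below binds a local port; grabbing a known-free port first can avoid a collision (the demo scripts' own port handling is not documented here, so `free_port` is a hypothetical helper):

```shell
# Hypothetical helper: ask the OS for an unused TCP port, e.g. to point a
# web demo (or an SSH tunnel to it) at a port known to be free.
free_port() {
  python3 - <<'EOF'
import socket
s = socket.socket()
s.bind(("", 0))  # port 0 lets the OS choose a free ephemeral port
print(s.getsockname()[1])
s.close()
EOF
}

# usage: PORT=$(free_port)
```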
@@ -190,7 +178,7 @@ We offer two methods: the first one is **command-line interaction**, and the sec
 2. Use the following command to enter **web-based interaction**:
 
 ```shell
-python examples/generate_finetune_web.py --base_model ./
+python examples/generate_finetune_web.py --base_model ./knowlm
 ```
 Here is a screenshot of the web-based interaction:
 <p align="center" width="100%">
@@ -203,7 +191,7 @@ We offer two methods: the first one is **command-line interaction**, and the sec
 Here, we provide a web-based interaction method. Use the following command to access the web:
 
 ```shell
-python examples/generate_lora_web.py --base_model ./
+python examples/generate_lora_web.py --base_model ./knowlm --lora_weights ./lora
 ```
 
 Here is a screenshot of the web-based interaction: