hiyouga committed on
Commit 0509a1e · 1 Parent(s): da2cd76

Update README.md

Files changed (1): README.md (+3 -3)
README.md CHANGED

@@ -14,9 +14,9 @@ tags:
 
 This is the LLaMAfied version of [Baichuan2-7B-Chat](https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat) model by Baichuan Inc.
 
-This model is converted with https://github.com/hiyouga/LLaMA-Efficient-Tuning/blob/main/tests/llamafy_baichuan2.py
+This model is converted with https://github.com/hiyouga/LLaMA-Factory/blob/main/tests/llamafy_baichuan2.py
 
-You may use this model for fine-tuning in downstream tasks, we recommend using our efficient fine-tuning toolkit. https://github.com/hiyouga/LLaMA-Efficient-Tuning
+You may use this model for fine-tuning in downstream tasks, we recommend using our efficient fine-tuning toolkit. https://github.com/hiyouga/LLaMA-Factory
 
 - **Developed by:** Baichuan Inc.
 - **Language(s) (NLP):** Chinese/English

@@ -37,7 +37,7 @@ inputs = inputs.to("cuda")
 generate_ids = model.generate(**inputs, max_new_tokens=256, streamer=streamer)
 ```
 
-You could also alternatively launch a CLI demo by using the script in [LLaMA-Efficient-Tuning](https://github.com/hiyouga/LLaMA-Efficient-Tuning)
+You could also alternatively launch a CLI demo by using the script in [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory)
 
 ```bash
 python src/cli_demo.py --template baichuan2 --model_name_or_path hiyouga/Baichuan2-7B-Chat-LLaMAfied
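
For context, the lines in the second hunk come from the README's usage example. Below is a minimal, self-contained sketch of that load-and-generate flow; it assumes the standard `transformers` `AutoTokenizer` / `AutoModelForCausalLM` / `TextStreamer` interfaces, and the plain-text prompt handling is an assumption (the official Baichuan2 chat template is not reproduced here).

```python
# Minimal sketch: load the LLaMAfied checkpoint with plain transformers classes
# and stream a generation, mirroring the snippet excerpted in the diff above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_id = "hiyouga/Baichuan2-7B-Chat-LLaMAfied"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="cuda"
)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# Assumption: a bare prompt string; see the model card for the exact
# Baichuan2 chat prompt format.
query = "Why is the sky blue?"
inputs = tokenizer(query, return_tensors="pt")
inputs = inputs.to("cuda")
generate_ids = model.generate(**inputs, max_new_tokens=256, streamer=streamer)
```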