yintongl committed · Commit 6d3c73a · verified · 1 Parent(s): ccb600b

Update README.md

Files changed (1)
  1. README.md +26 -18
README.md CHANGED
@@ -15,6 +15,32 @@ Inference of this model is compatible with AutoGPTQ's Kernel.
 
 
 
+
+
+ ### Reproduce the model
+
+ Here is the sample command to reproduce the model
+
+ ```bash
+ git clone https://github.com/intel/auto-round
+ cd auto-round/examples/language-modeling
+ pip install -r requirements.txt
+ python3 main.py \
+ --model_name microsoft/Phi-3-mini-4k-instruct \
+ --device 0 \
+ --group_size 128 \
+ --bits 4 \
+ --iters 1000 \
+ --deployment_device 'gpu' \
+ --disable_quanted_input \
+ --output_dir "./tmp_autoround" \
+
+ ```
+
+
+
+
+
### Evaluate the model

Install [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness.git) from source, we used the git id 96d185fa6232a5ab685ba7c43e45d1dbb3bb906d
@@ -41,25 +67,7 @@ lm_eval --model hf --model_args pretrained="Intel/Phi-3-mini-4k-instruct-int4-in
 
 
 
- ### Reproduce the model
-
- Here is the sample command to reproduce the model
-
- ```bash
- git clone https://github.com/intel/auto-round
- cd auto-round/examples/language-modeling
- pip install -r requirements.txt
- python3 main.py \
- --model_name microsoft/Phi-3-mini-4k-instruct \
- --device 0 \
- --group_size 128 \
- --bits 4 \
- --iters 1000 \
- --deployment_device 'gpu' \
- --disable_quanted_input \
- --output_dir "./tmp_autoround" \

- ```
 
 
 
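
The README pins lm-eval-harness to git id 96d185fa6232a5ab685ba7c43e45d1dbb3bb906d and, per the second hunk header, evaluates with `lm_eval --model hf --model_args pretrained=...`. Below is a minimal sketch of that install-and-evaluate flow; the model path placeholder, task list, and batch size are assumptions, not taken from the README.

```bash
# Sketch only: install lm-eval-harness at the commit pinned in the README.
git clone https://github.com/EleutherAI/lm-evaluation-harness.git
cd lm-evaluation-harness
git checkout 96d185fa6232a5ab685ba7c43e45d1dbb3bb906d
pip install -e .

# The `--model hf --model_args pretrained=...` form comes from the hunk header above;
# the model path is a placeholder (the repo id is truncated in the diff) and the
# tasks/batch size are illustrative assumptions.
lm_eval --model hf \
  --model_args pretrained="<path-or-repo-id-of-the-int4-model>" \
  --tasks lambada_openai,hellaswag \
  --batch_size 16
```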
 
 
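
The first hunk header notes that inference of this model is compatible with AutoGPTQ's kernel. As a rough illustration (not taken from the README), the sketch below loads a GPTQ-format INT4 checkpoint through Transformers; it assumes `transformers`, `optimum`, and `auto-gptq` are installed on a CUDA machine, and the repo id is a placeholder because it is truncated in the diff above.

```bash
# Sketch only: generic loading of a GPTQ-format INT4 checkpoint via Transformers.
pip install transformers optimum auto-gptq

python - <<'EOF'
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: the exact repo id is truncated in the diff above.
model_id = "<path-or-repo-id-of-the-int4-model>"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# trust_remote_code may be required for Phi-3 on older transformers releases.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", trust_remote_code=True)

prompt = "There is a girl who likes adventure,"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
EOF
```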