yintongl committed
Commit 98b0ab7 · verified · 1 Parent(s): 88ca315

Update README.md

Files changed (1):
  1. README.md +12 -12
README.md CHANGED
@@ -16,18 +16,6 @@ Inference of this model is compatible with AutoGPTQ's Kernel.
 
 
 
-
-### Evaluate the model
-
-Install [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness.git) from source; we used git id 96d185fa6232a5ab685ba7c43e45d1dbb3bb906d.
-
-```bash
-lm_eval --model hf --model_args pretrained="Intel/mpt-7b-chat-int4-inc",autogptq=True,gptq_use_triton=True --device cuda:0 --tasks lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,rte,arc_easy,arc_challenge,mmlu --batch_size 32
-```
-
-
-
-
 ### Reproduce the model
 
 Here is the sample command to reproduce the model
@@ -53,6 +41,18 @@ python3 main.py \
 
 
+### Evaluate the model
+
+Install [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness.git) from source; we used git id 96d185fa6232a5ab685ba7c43e45d1dbb3bb906d.
+
+```bash
+lm_eval --model hf --model_args pretrained="Intel/mpt-7b-chat-int4-inc",autogptq=True,gptq_use_triton=True --device cuda:0 --tasks lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,rte,arc_easy,arc_challenge,mmlu --batch_size 32
+```
+
+
+
+
+
 ## Caveats and Recommendations
 
 Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
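
The README's evaluation step says to install lm-eval-harness "from source" at a pinned git id but does not spell out the steps. A minimal sketch of one way to do that, assuming a standard editable pip install (the exact install command is an assumption; the source only names the repository and the commit):

```shell
# Pinned commit taken from the README above
COMMIT=96d185fa6232a5ab685ba7c43e45d1dbb3bb906d

# Clone the harness and check out the exact revision used for the reported results
git clone https://github.com/EleutherAI/lm-evaluation-harness.git
cd lm-evaluation-harness
git checkout "$COMMIT"

# Editable install so the `lm_eval` entry point is on PATH (assumption: pip-based setup)
pip install -e .
```

Pinning the commit matters here because harness task definitions change between revisions, so scores are only comparable against the same git id.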