Commit 4920b2e (verified) by TomPei
Parent(s): 83b0624

Update README.md

Files changed (1): README.md (+8 -4)

README.md CHANGED
@@ -61,12 +61,14 @@ To simplify the comparison, we chose the Pass@1 metric for the Python language,
 | Model | HumanEval python pass@1 |
 | --- |----------------------------------------------------------------------------- |
 | CodeLlama-7b-hf | 30.5%|
-| opencsg-CodeLlama-7b-v0.1(4k) | **43.9%** |
+| opencsg-CodeLlama-7b-v0.1 | **43.9%** |
 | CodeLlama-13b-hf | 36.0%|
-| opencsg-CodeLlama-13b-v0.1(4k) | **51.2%** |
+| opencsg-CodeLlama-13b-v0.1 | **51.2%** |
 | CodeLlama-34b-hf | 48.2%|
-| opencsg-CodeLlama-34b-v0.1(4k)| **56.1%** |
-| opencsg-CodeLlama-34b-v0.1(4k)| **64.0%** |
+| opencsg-CodeLlama-34b-v0.1| **56.1%** |
+| opencsg-CodeLlama-34b-v0.1| **64.0%** |
+| CodeLlama-70b-hf| 53.0% |
+| CodeLlama-70b-Instruct-hf| **67.8%** |
 
 **TODO**
 - We will provide more benchmark scores on fine-tuned models in the future.
@@ -180,6 +182,8 @@ HumanEval is the most common benchmark for evaluating a model's performance in code generation, especially
 | CodeLlama-34b-hf | 48.2%|
 | opencsg-CodeLlama-34b-v0.1| **56.1%** |
 | opencsg-CodeLlama-34b-v0.1| **64.0%** |
+| CodeLlama-70b-hf| 53.0% |
+| CodeLlama-70b-Instruct-hf| **67.8%** |
 
 
 **TODO**
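
For context on the metric used in the tables above: pass@1 is the standard HumanEval score, conventionally computed with the unbiased pass@k estimator from the HumanEval paper (Chen et al., 2021). The sketch below illustrates that estimator only; the commit does not state the sampling setup (number of samples, temperature, or greedy decoding) actually used to produce these scores, and the numbers in the example are hypothetical.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator from the HumanEval paper (Chen et al., 2021).

    n: total completions sampled for one problem
    c: completions that pass all unit tests
    k: the k in pass@k
    """
    if n - c < k:
        return 1.0
    # 1 - C(n-c, k) / C(n, k), computed as a numerically stable product
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Hypothetical illustration: 200 samples for a problem, 61 of them passing,
# gives pass@1 = 61/200 = 30.5% for that problem.
print(pass_at_k(n=200, c=61, k=1))
```

The benchmark score reported in the tables is then the mean of this per-problem value over the 164 HumanEval problems.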