TomPei committed
Commit 83b0624 · verified · 1 Parent(s): 0477d54

Update README.md

Files changed (1):
  1. README.md (+6 -0)
README.md CHANGED
@@ -42,6 +42,7 @@ This is the repository for the base 13B version finetuned based on [CodeLlama-13
 | 7B | [opencsg/Opencsg-CodeLlama-7b-v0.1](https://huggingface.co/opencsg/opencsg-CodeLlama-7b-v0.1) |
 | 13B | [opencsg/Opencsg-CodeLlama-13b-v0.1](https://huggingface.co/opencsg/opencsg-CodeLlama-13b-v0.1) |
 | 34B | [opencsg/Opencsg-CodeLlama-34b-v0.1](https://huggingface.co/opencsg/opencsg-CodeLlama-34b-v0.1) |
+| 34B | [opencsg/Opencsg-CodeLlama-34b-v0.2](https://huggingface.co/opencsg/opencsg-CodeLlama-34b-v0.2) |
 
 
 ## Model Eval
@@ -65,6 +66,7 @@ To simplify the comparison, we chosed the Pass@1 metric for the Python language,
 | opencsg-CodeLlama-13b-v0.1(4k) | **51.2%** |
 | CodeLlama-34b-hf | 48.2%|
 | opencsg-CodeLlama-34b-v0.1(4k)| **56.1%** |
+| opencsg-CodeLlama-34b-v0.1(4k)| **64.0%** |
 
 **TODO**
 - We will provide more benchmark scores on fine-tuned models in the future.
@@ -152,6 +154,8 @@ opencsg-CodeLlama-v0.1是一系列基于CodeLlama的通过全参数微调方法
 | 7B | [opencsg/Opencsg-CodeLlama-7b-v0.1](https://huggingface.co/opencsg/opencsg-CodeLlama-7b-v0.1) |
 | 13B | [opencsg/Opencsg-CodeLlama-13b-v0.1](https://huggingface.co/opencsg/opencsg-CodeLlama-13b-v0.1) |
 | 34B | [opencsg/Opencsg-CodeLlama-34b-v0.1](https://huggingface.co/opencsg/opencsg-CodeLlama-34b-v0.1) |
+| 34B | [opencsg/Opencsg-CodeLlama-34b-v0.2](https://huggingface.co/opencsg/opencsg-CodeLlama-34b-v0.2) |
+
 
 
 ## 模型评估
@@ -175,6 +179,8 @@ HumanEval 是评估模型在代码生成方面性能的最常见的基准,尤
 | opencsg-CodeLlama-13b-v0.1 | **51.2%** |
 | CodeLlama-34b-hf | 48.2%|
 | opencsg-CodeLlama-34b-v0.1| **56.1%** |
+| opencsg-CodeLlama-34b-v0.1| **64.0%** |
+
 
 **TODO**
 - 未来我们将提供更多微调模型的在各基准上的分数。
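For context on the checkpoints listed in the diff above: they are hosted on the Hugging Face Hub, so they can normally be loaded with the standard transformers API. The sketch below is illustrative only and is not taken from the commit; it assumes transformers and torch are installed and that enough GPU memory is available for the 34B weights.

```python
# Minimal sketch: loading one of the checkpoints listed above via Hugging Face transformers.
# Assumes `transformers` and `torch` are installed; the prompt is illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "opencsg/opencsg-CodeLlama-34b-v0.2"  # repo id taken from the table in the diff

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to reduce memory for the 34B weights
    device_map="auto",          # place layers on whatever devices are available
)

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```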
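The eval tables in the diff report HumanEval Pass@1 scores. The commit does not show how those numbers were produced; for reference, the commonly used unbiased pass@k estimator from the HumanEval paper can be sketched as below, where n is the number of samples per problem and c the number that pass the unit tests.

```python
# Sketch of the standard unbiased pass@k estimator from the HumanEval paper;
# not taken from this repository, since the commit does not include evaluation code.
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Estimate of the probability that at least one of k samples is correct."""
    if n - c < k:
        return 1.0
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Example: 20 samples per task, 9 of them correct gives a pass@1 estimate of 0.45
print(pass_at_k(n=20, c=9, k=1))
```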