---
language:
- code
pipeline_tag: text-generation
tags:
- llama-2
license: llama2
---
# **Opencsg-CodeLlama-13b-v0.1** [[中文]](#chinese) [[English]](#english)
<a id="english"></a>
<p align="center">
<img width="300px" alt="OpenCSG" src="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/GwYXPKuEoGCGcMICeW-sb.jpeg">
</p>
<p align="center"><a href="https://portal.opencsg.com/models">[OpenCSG Community]</a> <a href="https://github.com/opencsgs">[github]</a> <a href="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/HU6vz21qKTEmUBCWqCFh9.jpeg">[wechat]</a> <a href="https://twitter.com/OpenCsg">[Twitter]</a> </p>
In OpenCSG, 'Open' stands for open source and openness; the 'C' represents Converged resources, indicating the integration and full utilization of hybrid resources; the 'S' stands for Software refined, signifying software that is refined by large models; and the 'G' represents Generative LM, which denotes widespread, inclusive, and democratized generative large models.
The vision of OpenCSG is to empower every industry, every company, and every individual to own their models. We adhere to the principles of openness and open source, making the large model software stack of OpenCSG available to the community. We welcome everyone to use, send feedback, and contribute collaboratively.
## Model Description
CodeLlama is a collection of pretrained and fine-tuned generative code models based on Llama 2, ranging in scale from 7 billion to 34 billion parameters.
Based on CodeLlama, opencsg-CodeLlama-v0.1 is a series of models fine-tuned with a full-parameter fine-tuning method.
<br>
This is the repository for the base 13B model, fine-tuned from [CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf).
| Model Size | Base Model |
| --- | ----------------------------------------------------------------------------- |
| 7B | [opencsg/Opencsg-CodeLlama-7b-v0.1](https://huggingface.co/opencsg/opencsg-CodeLlama-7b-v0.1) |
| 13B | [opencsg/Opencsg-CodeLlama-13b-v0.1](https://huggingface.co/opencsg/opencsg-CodeLlama-13b-v0.1) |
| 34B | [opencsg/Opencsg-CodeLlama-34b-v0.1](https://huggingface.co/opencsg/opencsg-CodeLlama-34b-v0.1) |
| 34B | [opencsg/Opencsg-CodeLlama-34b-v0.2](https://huggingface.co/opencsg/opencsg-CodeLlama-34b-v0.2) |
## Model Eval
HumanEval is the most common code-generation benchmark for evaluating model performance, especially on the completion of code exercises.
Model evaluation is, to some extent, an inexact science: different models have different sensitivities to decoding methods, parameters, and instructions.
It is impractical for us to hand-tune a specific configuration for each fine-tuned model, because a capable LLM should retain its general abilities even when users vary the decoding parameters.
Therefore, OpenCSG went to great lengths to provide a relatively fair method for comparing the fine-tuned models on the HumanEval benchmark.
To simplify the comparison, we chose the Pass@1 metric for the Python language, although our fine-tuning dataset includes samples in multiple languages.
**For fairness, we evaluated the original and fine-tuned CodeLlama models based only on the prompts from the original cases, without including any other instructions.**
**Besides, we use the greedy decoding method for each model during evaluation.**
| Model | HumanEval python pass@1 |
| --- |----------------------------------------------------------------------------- |
| CodeLlama-7b-hf | 30.5%|
| **opencsg-CodeLlama-7b-v0.1** | **43.9%** |
| CodeLlama-13b-hf | 36.0%|
| **opencsg-CodeLlama-13b-v0.1** | **51.2%** |
| CodeLlama-34b-hf | 48.2%|
| **opencsg-CodeLlama-34b-v0.1**| **56.1%** |
| **opencsg-CodeLlama-34b-v0.2**| **64.0%** |
| CodeLlama-70b-hf| 53.0% |
| CodeLlama-70b-Instruct-hf| **67.8%** |
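Under greedy decoding a single completion is drawn per problem, so Pass@1 reduces to the fraction of problems whose one completion passes all unit tests. More generally, the unbiased pass@k estimator introduced with HumanEval can be sketched as follows (a minimal illustration, not the evaluation harness we used):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: n samples drawn, c of them correct.

    Returns the probability that at least one of k randomly chosen
    samples (out of the n drawn) passes the unit tests.
    """
    if n - c < k:
        return 1.0  # every size-k subset must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# With greedy decoding we draw a single sample per problem (n = k = 1),
# so pass@1 is 1.0 if that sample is correct and 0.0 otherwise.
print(pass_at_k(1, 1, 1))  # 1.0
print(pass_at_k(1, 0, 1))  # 0.0
```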
**TODO**
- We will provide more benchmark scores on fine-tuned models in the future.
- We will provide different practical problems to evaluate the performance of fine-tuned models in the field of software engineering.
# Model Usage
```python
from transformers import AutoTokenizer
import transformers
import torch

model = "opencsg/opencsg-CodeLlama-13b-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model, trust_remote_code=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

input_text = "#write a quick sort algorithm."
sequences = pipeline(
    input_text,
    do_sample=False,  # greedy decoding, matching the evaluation setup
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=256,
)
for seq in sequences:
    # strip the prompt and print only the generated completion
    print(seq['generated_text'][len(input_text):])
```
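For the quick-sort prompt above, a correct completion should resemble a standard implementation like the following (illustrative only; the model's actual output will vary):

```python
def quick_sort(arr):
    """Return a sorted copy of arr using the quick sort algorithm."""
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quick_sort(left) + middle + quick_sort(right)

print(quick_sort([3, 6, 8, 10, 1, 2, 1]))  # [1, 1, 2, 3, 6, 8, 10]
```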
# Training
## Hardware
- **GPUs:** 8 NVIDIA A800
- **Training time:** 4 hours
## Software
- **Orchestration:** [DeepSpeed](https://github.com/microsoft/DeepSpeed)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)
- **BF16 (if applicable):** [apex](https://github.com/NVIDIA/apex)
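The exact training configuration is not published; for readers unfamiliar with DeepSpeed, a full-parameter fine-tune on 8 GPUs is typically driven by a JSON config along these lines (every value below is an illustrative assumption, not our actual setting):

```json
{
  "train_micro_batch_size_per_gpu": 4,
  "gradient_accumulation_steps": 8,
  "bf16": { "enabled": true },
  "zero_optimization": {
    "stage": 2,
    "overlap_comm": true,
    "contiguous_gradients": true
  },
  "gradient_clipping": 1.0
}
```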
<a id="chinese"></a>
<p>
</p>
# About OpenCSG
<p align="center">
<img width="300px" alt="OpenCSG" src="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/GwYXPKuEoGCGcMICeW-sb.jpeg">
</p>
<p align="center"><a href="https://opencsg.com/models">[OpenCSG Community]</a> <a href="https://github.com/opencsgs">[github]</a> <a href="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/HU6vz21qKTEmUBCWqCFh9.jpeg">[WeChat]</a> <a href="https://twitter.com/OpenCsg">[Twitter]</a> </p>
In OpenCSG, 'Open' stands for open source and openness; 'C' represents Converged resources, integrating and fully utilizing hybrid heterogeneous resources to lower compute costs and raise efficiency; 'S' represents Software refined, redefining software delivery by driving development with large models to lower labor costs and raise efficiency; and 'G' represents Generative LM: widespread, inclusive, democratized, and commercially usable open-source generative large models.
The vision of OpenCSG is to empower every industry, every company, and every individual to own their own models. We adhere to the principles of openness and open source, releasing the OpenCSG large-model software stack to the community, and we welcome everyone to use it, give feedback, contribute, and follow our work.
## Model Description
CodeLlama is a series of generative code models pretrained and fine-tuned from Llama 2, ranging in scale from 7 billion to 34 billion parameters.
opencsg-CodeLlama-v0.1 is a series of models based on CodeLlama, tuned with a full-parameter fine-tuning method.
<br>
This is the model version fine-tuned from [CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf).
| Model Size | Base Model |
| --- | ----------------------------------------------------------------------------- |
| 7B | [opencsg/Opencsg-CodeLlama-7b-v0.1](https://huggingface.co/opencsg/opencsg-CodeLlama-7b-v0.1) |
| 13B | [opencsg/Opencsg-CodeLlama-13b-v0.1](https://huggingface.co/opencsg/opencsg-CodeLlama-13b-v0.1) |
| 34B | [opencsg/Opencsg-CodeLlama-34b-v0.1](https://huggingface.co/opencsg/opencsg-CodeLlama-34b-v0.1) |
| 34B | [opencsg/Opencsg-CodeLlama-34b-v0.2](https://huggingface.co/opencsg/opencsg-CodeLlama-34b-v0.2) |
## Model Evaluation
HumanEval is the most common benchmark for evaluating model performance in code generation, especially on the completion of code exercises.
Model evaluation is, to some extent, an inexact science: different models have different sensitivities to decoding methods, parameters, and instructions,
yet a capable large model should possess general abilities whose generation quality does not vary greatly with adjustments to decoding parameters.
Therefore, OpenCSG provides a relatively fair method for comparing the fine-tuned models on the HumanEval benchmark.
For convenience, we chose the Pass@1 metric for the Python language, but note that our fine-tuning dataset contains multiple programming languages.
**For fairness, we evaluated the original and fine-tuned CodeLlama models based only on the prompts of the original problems, without any additional instructions.**
**In addition, we used the greedy decoding method for each model during evaluation.**
| Model | HumanEval python pass@1 |
| --- |----------------------------------------------------------------------------- |
| CodeLlama-7b-hf | 30.5%|
| **opencsg-CodeLlama-7b-v0.1** | **43.9%** |
| CodeLlama-13b-hf | 36.0%|
| **opencsg-CodeLlama-13b-v0.1** | **51.2%** |
| CodeLlama-34b-hf | 48.2%|
| **opencsg-CodeLlama-34b-v0.1**| **56.1%** |
| **opencsg-CodeLlama-34b-v0.2**| **64.0%** |
| CodeLlama-70b-hf| 53.0% |
| CodeLlama-70b-Instruct-hf| **67.8%** |
**TODO**
- In the future, we will provide more benchmark scores for the fine-tuned models.
- We will provide different practical problems to evaluate the performance of fine-tuned models in the field of software engineering.
# Model Usage
```python
from transformers import AutoTokenizer
import transformers
import torch

model = "opencsg/opencsg-CodeLlama-13b-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model, trust_remote_code=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

input_text = "#write a quick sort algorithm."
sequences = pipeline(
    input_text,
    do_sample=False,  # greedy decoding, matching the evaluation setup
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=256,
)
for seq in sequences:
    # strip the prompt and print only the generated completion
    print(seq['generated_text'][len(input_text):])
```
# Training
## Hardware
- **GPUs:** 8 NVIDIA A800
- **Training time:** 4 hours
## Software
- **Orchestration:** [DeepSpeed](https://github.com/microsoft/DeepSpeed)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)
- **BF16:** [apex](https://github.com/NVIDIA/apex)