---
pipeline_tag: text-generation
inference: true
widget:
- text: 'def print_hello_world():'
  example_title: Hello world
  group: Python
license: bigcode-openrail-m
datasets:
- bigcode/commitpackft
- bigcode/oasst-octopack
metrics:
- code_eval
library_name: transformers
tags:
- code
model-index:
- name: OctoCoder
  results:
  - task:
      type: text-generation
    dataset:
      type: bigcode/humanevalpack
      name: HumanEvalSynthesize Python
    metrics:
    - name: pass@1
      type: pass@1
      value: 46.2
      verified: false
  - task:
      type: text-generation
    dataset:
      type: bigcode/humanevalpack
      name: HumanEvalSynthesize JavaScript
    metrics:
    - name: pass@1
      type: pass@1
      value: 39.2
      verified: false
---
![Octopack](https://github.com/bigcode-project/octopack/blob/31f3320f098703c7910e43492c39366eeea68d83/banner.png?raw=true)
# OctoCoder
Play with the model on the [TODO Playground](https://huggingface.co/spaces/bigcode/bigcode-playground).
## Table of Contents
1. [Model Summary](#model-summary)
2. [Use](#use)
3. [Limitations](#limitations)
4. [Training](#training)
5. [License](#license)
6. [Citation](#citation)
## Model Summary
OctoCoder is an instruction-tuned model with 15.5B parameters, created by finetuning StarCoder on CommitPackFT & OASST as described in the OctoPack paper.
- **Repository:** [bigcode/octopack](https://github.com/bigcode-project/octopack)
- **Paper:** [TODO]()
- **Languages:** 80+ Programming languages
- **OctoPack🐙🎒:**
<table>
<tr>
<th>Data</th>
<th><a href="https://huggingface.co/datasets/bigcode/commitpack">CommitPack</a></th>
<td>4TB of GitHub commits across 350 programming languages</td>
</tr>
<tr>
<th></th>
<th><a href="https://huggingface.co/datasets/bigcode/commitpackft">CommitPackFT</a></th>
<td>Filtered version of CommitPack for high-quality commit messages that resemble instructions</td>
</tr>
<tr>
<th>Model</th>
<th><a href="https://huggingface.co/bigcode/octocoder">OctoCoder</a></th>
<td>StarCoder (16B parameters) instruction tuned on CommitPackFT + OASST</td>
</tr>
<tr>
<th></th>
<th><a href="https://huggingface.co/bigcode/octogeex">OctoGeeX</a></th>
<td>CodeGeeX2 (6B parameters) instruction tuned on CommitPackFT + OASST</td>
</tr>
<tr>
<th>Evaluation</th>
<th><a href="https://huggingface.co/datasets/bigcode/humanevalpack">HumanEvalPack</a></th>
<td>Extension of OpenAI's HumanEval to cover 3 scenarios across 6 languages</td>
</tr>
</table>
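
For a quick look at the training data, CommitPackFT can be loaded with the `datasets` library. The sketch below is illustrative only; the `"python"` config name is an assumption based on the dataset card, not something this model card specifies.

```python
# pip install -q datasets
from datasets import load_dataset

# Load the Python subset of CommitPackFT (the "python" config name is an assumption
# taken from the dataset card, not from this model card).
ds = load_dataset("bigcode/commitpackft", "python", split="train")

# Each record pairs a commit message with the code before and after the change.
print(ds[0].keys())
```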
## Use
### Intended use
The model follows instructions provided in the input. We recommend prefacing your input with "Question: " and finishing with "Answer:", for example: "Question: Please write a function in Python that performs bubble sort.\n\nAnswer:"
**Feel free to share your generations in the Community tab!**
### Generation
```python
# pip install -q transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/octocoder"
device = "cuda"  # for GPU usage or "cpu" for CPU usage

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

# Prompt in the recommended "Question: ... Answer:" format.
inputs = tokenizer.encode("Question: Please write a function in Python that performs bubble sort.\n\nAnswer:", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
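
With the defaults above, `generate` stops after only a handful of new tokens. A minimal sketch of a longer, sampled completion follows; the parameter values are illustrative choices, not settings recommended by this card.

```python
# Longer completion with sampling; the specific values are illustrative, not prescribed.
outputs = model.generate(
    inputs,
    max_new_tokens=256,                   # leave room for a full function body
    do_sample=True,
    temperature=0.2,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,  # avoid the missing-pad-token warning
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```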
## Training
### Model
- **Architecture:** GPT-2 model with multi-query attention and Fill-in-the-Middle objective (see the config sketch after this list)
- **Steps:** 250k for pretraining & 30 for instruction tuning
- **Tokens:** 1 trillion for pretraining & 2M for instruction tuning
- **Precision:** bfloat16
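
As referenced in the architecture bullet above, the checkpoint's configuration can be inspected directly. This is a minimal sketch assuming the model loads as the GPTBigCode architecture in `transformers` (the StarCoder family), where multi-query attention is exposed as a config flag.

```python
from transformers import AutoConfig

# Inspect the checkpoint's architecture; OctoCoder is assumed to load as a
# GPTBigCode model (the StarCoder family) in transformers.
config = AutoConfig.from_pretrained("bigcode/octocoder")
print(type(config).__name__)  # expected: GPTBigCodeConfig
print(config.multi_query)     # multi-query attention flag, expected: True
```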
### Hardware
- **Pretraining:**
  - **GPUs:** 512 Tesla A100
  - **Training time:** 24 days
- **Instruction tuning:**
  - **GPUs:** 8 Tesla A100
  - **Training time:** 4 hours
### Software
- **Orchestration:** [Megatron-LM/Transformers](https://github.com/bigcode-project/octopack#training)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)
## Citation
TODO