---
inference: false
pipeline_tag: text-generation
widget:
- text: 'def print_hello_world():'
  example_title: Hello world
  group: Python
- text: 'Gradient descent is'
  example_title: Machine Learning
  group: English
license: bigcode-openrail-m
datasets:
- bigcode/the-stack-dedup
- tiiuae/falcon-refinedweb
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
- QingyiSi/Alpaca-CoT
- teknium/GPTeacher-General-Instruct
- metaeval/ScienceQA_text_only
- hellaswag
- openai/summarize_from_feedback
- riddle_sense
- gsm8k
- camel-ai/math
- camel-ai/biology
- camel-ai/physics
- camel-ai/chemistry
- winglian/evals
metrics:
- code_eval
- mmlu
- arc
- hellaswag
- truthfulqa
library_name: transformers
tags:
- code
extra_gated_prompt: >-
  ## Model License Agreement
  Please read the BigCode [OpenRAIL-M
  license](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement)
  agreement before accepting it.
extra_gated_fields:
  I accept the above license agreement, and will use the Model complying with the set of use restrictions and sharing requirements: checkbox
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# OpenAccess AI Collective's Minotaur 15B GGML
These files are GGML format model files for [OpenAccess AI Collective's Minotaur 15B](https://huggingface.co/openaccess-ai-collective/minotaur-15b).
Please note that these GGMLs are **not compatible with llama.cpp, or currently with text-generation-webui**. Please see below for a list of tools known to work with these model files.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/minotaur-15B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/minotaur-15B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/openaccess-ai-collective/minotaur-15b)
## A note regarding context length: 8K
The 8K context of this model is confirmed to work in [KoboldCpp](https://github.com/LostRuins/koboldcpp), provided you manually set the max context to 8K by adjusting the text box above the slider:
![.](https://s3.amazonaws.com/moonup/production/uploads/63cd4b6d1c8a5d1d7d76a778/LcoIOa7YdDZa-R-R4BWYw.png)
(set it to 8192 at most)
It is currently unknown whether it is compatible with other clients.
If you have feedback on this, please let me know.
## Prompt template
```
USER: <prompt>
ASSISTANT:
```
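For reference, here is a minimal sketch (not part of the original card) of assembling a prompt in this `USER:`/`ASSISTANT:` format before passing it to whichever client you use:

```python
# Illustrative only: build a prompt string in the USER:/ASSISTANT: format this model expects.
def build_prompt(user_message: str) -> str:
    return f"USER: {user_message}\nASSISTANT:"

print(build_prompt("Explain gradient descent in one paragraph."))
```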
<!-- compatibility_ggml start -->
## Compatibility
These files are **not** compatible with text-generation-webui, llama.cpp, or llama-cpp-python.
Currently they can be used with:
* KoboldCpp, a powerful inference engine based on llama.cpp, with good UI: [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* The ctransformers Python library, which includes LangChain support: [ctransformers](https://github.com/marella/ctransformers)
* The LoLLMS Web UI which uses ctransformers: [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
* [rustformers' llm](https://github.com/rustformers/llm)
* The example `starcoder` binary provided with [ggml](https://github.com/ggerganov/ggml)
As other options become available I will endeavour to update them here (do let me know in the Community tab if I've missed something!)
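As an example, below is a minimal sketch of loading one of these GGML files with the ctransformers library listed above. The choice of quant file is illustrative; any of the `.bin` files from the Provided files table below should work, subject to your available RAM.

```python
# A minimal sketch, assuming ctransformers is installed (pip install ctransformers).
# The q4_0 file chosen here is illustrative; pick whichever quant fits your RAM.
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/minotaur-15B-GGML",
    model_file="minotaur-15b.ggmlv3.q4_0.bin",
    model_type="starcoder",  # Minotaur 15B uses the StarCoder architecture
)

prompt = "USER: Write a Python function that reverses a string.\nASSISTANT:"
print(llm(prompt, max_new_tokens=128))
```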
## Tutorial for using LoLLMS Web UI
* [Text tutorial, written by **Lucas3DCG**](https://huggingface.co/TheBloke/MPT-7B-Storywriter-GGML/discussions/2#6475d914e9b57ce0caa68888)
* [Video tutorial, by LoLLMS Web UI's author **ParisNeo**](https://www.youtube.com/watch?v=ds_U0TDzbzI)
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| minotaur-15b.ggmlv3.q4_0.bin | q4_0 | 4 | 10.75 GB | 13.25 GB | Original llama.cpp quant method, 4-bit. |
| minotaur-15b.ggmlv3.q4_1.bin | q4_1 | 4 | 11.92 GB | 14.42 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| minotaur-15b.ggmlv3.q5_0.bin | q5_0 | 5 | 13.09 GB | 15.59 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| minotaur-15b.ggmlv3.q5_1.bin | q5_1 | 5 | 14.26 GB | 16.76 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| minotaur-15b.ggmlv3.q8_0.bin | q8_0 | 8 | 20.11 GB | 22.61 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: vamX, K, Jonathan Leane, Lone Striker, Sean Connelly, Chris McCloskey, WelcomeToTheClub, Nikolai Manek, John Detwiler, Kalila, David Flickinger, Fen Risland, subjectnull, Johann-Peter Hartmann, Talal Aujan, John Villwock, senxiiz, Khalefa Al-Ahmad, Kevin Schuppel, Alps Aficionado, Derek Yates, Mano Prime, Nathan LeClaire, biorpg, trip7s trip, Asp the Wyvern, chris gileta, Iucharbius , Artur Olbinski, Ai Maven, Joseph William Delisle, Luke Pendergrass, Illia Dulskyi, Eugene Pentland, Ajan Kanaga, Willem Michiel, Space Cruiser, Pyrater, Preetika Verma, Junyu Yang, Oscar Rangel, Spiking Neurons AB, Pierre Kircher, webtim, Cory Kujawski, terasurfer , Trenton Dambrowitz, Gabriel Puliatti, Imad Khwaja, Luke.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: OpenAccess AI Collective's Minotaur 15B
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
**[💵 Donate to OpenAccess AI Collective](https://github.com/sponsors/OpenAccess-AI-Collective) to help us keep building great tools and models!**
# Minotaur 15B 8K
Minotaur 15B is an instruct fine-tuned model built on top of StarCoderPlus. Minotaur 15B is fine-tuned **on only completely open datasets**, making this model reproducible by anyone.
Minotaur 15B has a context length of 8K tokens, allowing for strong recall at long contexts.
Questions, comments, feedback, looking to donate, or want to help? Reach out on our [Discord](https://discord.gg/PugNNHAF5r) or email [[email protected]](mailto:[email protected])
# Prompts
Chat only style prompts using `USER:`,`ASSISTANT:`.
<img src="https://huggingface.co/openaccess-ai-collective/minotaur-13b/resolve/main/minotaur.png" alt="minotaur" width="600" height="500"/>
# Training Datasets
Minotaur 15B model is fine-tuned on the following openly available datasets:
- [WizardLM](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered)
- [subset of QingyiSi/Alpaca-CoT for roleplay and CoT](https://huggingface.co/QingyiSi/Alpaca-CoT)
- [GPTeacher-General-Instruct](https://huggingface.co/datasets/teknium/GPTeacher-General-Instruct)
- [metaeval/ScienceQA_text_only](https://huggingface.co/datasets/metaeval/ScienceQA_text_only) - instruct for concise responses
- [openai/summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback) - instruct augmented tl;dr summarization
- [camel-ai/math](https://huggingface.co/datasets/camel-ai/math)
- [camel-ai/physics](https://huggingface.co/datasets/camel-ai/physics)
- [camel-ai/chemistry](https://huggingface.co/datasets/camel-ai/chemistry)
- [camel-ai/biology](https://huggingface.co/datasets/camel-ai/biology)
- [winglian/evals](https://huggingface.co/datasets/winglian/evals) - instruct augmented datasets
- custom synthetic datasets around misconceptions, in-context QA, jokes, N-tasks problems, and context-insensitivity
- ARC-Easy & ARC-Challenge - instruct augmented for detailed responses, derived from the `train` split
- [hellaswag](https://huggingface.co/datasets/hellaswag) - 30K+ rows, instruct augmented for detailed explanations, derived from the `train` split
- [riddle_sense](https://huggingface.co/datasets/riddle_sense) - instruct augmented, derived from the `train` split
- [gsm8k](https://huggingface.co/datasets/gsm8k) - instruct augmented, derived from the `train` split
- prose generation
# Shoutouts
Special thanks to Nanobit for helping with Axolotl, and to TheBloke for quantizing these models so they are more accessible to all.
# Demo
HF Demo in Spaces available in the [Community ChatBot Arena](https://huggingface.co/spaces/openaccess-ai-collective/rlhf-arena) under the OAAIC Chatbots tab.
## Release Notes
- https://wandb.ai/wing-lian/minotaur-16b-8k/runs/tshgbl2k
## Build
Minotaur was built with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) on 4XA100 80GB
- 1 epoch, taking approximately 30 hours
- Trained using QLoRA techniques (an illustrative sketch of QLoRA-style fine-tuning follows below)
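For readers unfamiliar with QLoRA, the sketch below shows what QLoRA-style fine-tuning looks like in code using the `peft` and `bitsandbytes` libraries. This is **not** the actual Axolotl configuration used to train Minotaur; the base checkpoint, LoRA targets, and every hyperparameter shown are purely illustrative.

```python
# Illustrative QLoRA-style setup (not the actual Axolotl config used for Minotaur).
# Requires: transformers, peft, bitsandbytes, accelerate.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = "bigcode/starcoderplus"  # hypothetical choice of base checkpoint

# QLoRA: load the frozen base model in 4-bit NF4, compute in bfloat16
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Train only small low-rank adapter matrices on top of the quantised weights
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["c_attn"],  # illustrative target for a StarCoder-style model
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # adapters are a tiny fraction of total parameters
```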
## Bias, Risks, and Limitations
Minotaur has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
Minotaur was fine-tuned from the base model StarCoderPlus; please refer to its model card's Limitations section (included below) for relevant information.
## Benchmarks
TBD
## Examples
TBD
# StarCoderPlus
Play with the instruction-tuned StarCoderPlus at [StarChat-Beta](https://huggingface.co/spaces/HuggingFaceH4/starchat-playground).
## Table of Contents
1. [Model Summary](#model-summary)
2. [Use](#use)
3. [Limitations](#limitations)
4. [Training](#training)
5. [License](#license)
6. [Citation](#citation)
## Model Summary
StarCoderPlus is a fine-tuned version of [StarCoderBase](https://huggingface.co/bigcode/starcoderbase) on 600B tokens from the English web dataset [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)
combined with [StarCoderData](https://huggingface.co/datasets/bigcode/starcoderdata) from [The Stack (v1.2)](https://huggingface.co/datasets/bigcode/the-stack) and a Wikipedia dataset.
It's a 15.5B parameter Language Model trained on English and 80+ programming languages. The model uses [Multi Query Attention](https://arxiv.org/abs/1911.02150),
[a context window of 8192 tokens](https://arxiv.org/abs/2205.14135), and was trained using the [Fill-in-the-Middle objective](https://arxiv.org/abs/2207.14255) on 1.6 trillion tokens.
- **Repository:** [bigcode/Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
- **Project Website:** [bigcode-project.org](https://www.bigcode-project.org)
- **Point of Contact:** [[email protected]](mailto:[email protected])
- **Languages:** English & 80+ Programming languages
## Use
### Intended use
The model was trained on English and GitHub code. As such it is _not_ an instruction model and commands like "Write a function that computes the square root." do not work well. However, the instruction-tuned version in [StarChat](https://huggingface.co/spaces/HuggingFaceH4/starchat-playground) makes a capable assistant.
**Feel free to share your generations in the Community tab!**
### Generation
```python
# pip install -q transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigcode/starcoderplus"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
### Fill-in-the-middle
Fill-in-the-middle uses special tokens to identify the prefix/middle/suffix parts of the input and output (this example reuses the `tokenizer`, `model`, and `device` from the Generation example above):
```python
input_text = "<fim_prefix>def print_hello_world():\n <fim_suffix>\n print('Hello world!')<fim_middle>"
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
### Attribution & Other Requirements
The training code dataset of the model was filtered for permissive licenses only. Nevertheless, the model can generate source code verbatim from the dataset. The code's license might require attribution and/or impose other specific requirements that must be respected. We provide a [search index](https://huggingface.co/spaces/bigcode/starcoder-search) that lets you search through the pretraining data to identify where generated code came from and apply the proper attribution to your code.
# Limitations
The model has been trained on a mixture of English text from the web and GitHub code. Therefore it might encounter limitations when working with non-English text, and can carry the stereotypes and biases commonly encountered online.
Additionally, the generated code should be used with caution as it may contain errors, inefficiencies, or potential vulnerabilities. For a more comprehensive understanding of the base model's code limitations, please refer to the [StarCoder paper](https://arxiv.org/abs/2305.06161).
# Training
StarCoderPlus is a version of StarCoderBase fine-tuned on 600B English and code tokens; StarCoderBase itself was pre-trained on 1T code tokens. Below are the fine-tuning details:
## Model
- **Architecture:** GPT-2 model with multi-query attention and Fill-in-the-Middle objective
- **Finetuning steps:** 150k
- **Finetuning tokens:** 600B
- **Precision:** bfloat16
## Hardware
- **GPUs:** 512 Tesla A100
- **Training time:** 14 days
## Software
- **Orchestration:** [Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)
- **BF16 if applicable:** [apex](https://github.com/NVIDIA/apex)
# License
The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement [here](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement).