---
inference: false
language:
- en
pipeline_tag: text-generation
tags:
- llama
- llama-2
license: mit
---

# llama-2-supercot-lora

Standard 8-bit LoRA trained with [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) on the [SuperCOT](https://huggingface.co/datasets/kaiokendev/SuperCOT-dataset) dataset, using the Alpaca instruct format at 4096 context length:

```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
<instruction>

### Input:
<any additional context. Remove this if it's not necessary>

### Response:
<make sure to leave a single new line here for optimal results>
```
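
As a rough usage sketch (not part of the original card), the adapter can be applied to a Llama-2 base model with `peft` and prompted in the format above. The base checkpoint name (`meta-llama/Llama-2-7b-hf`), the adapter path (`llama-2-supercot-lora`), and the `alpaca_prompt` helper are illustrative assumptions, not values documented here.

```python
# Illustrative sketch only: base model name, adapter path, and the helper
# function below are assumptions, not values taken from this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "meta-llama/Llama-2-7b-hf"   # assumed Llama-2 base checkpoint
ADAPTER_PATH = "llama-2-supercot-lora"    # hypothetical local path / repo id for this LoRA

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, ADAPTER_PATH)

def alpaca_prompt(instruction: str, context: str = "") -> str:
    """Build a prompt in the Alpaca instruct format shown above."""
    prompt = (
        "Below is an instruction that describes a task, paired with an input "
        "that provides further context. Write a response that appropriately "
        "completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
    )
    if context:
        prompt += f"### Input:\n{context}\n\n"
    # Leave a single new line after "### Response:", as recommended above.
    return prompt + "### Response:\n"

inputs = tokenizer(
    alpaca_prompt("Summarize what a LoRA adapter is."), return_tensors="pt"
).to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```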

## Bias, Risks, and Limitations

The model exhibits biases similar to those of the base model. It is not intended to supply factual information or advice in any form.