---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Open Orca Llama 3 8B
- **Fine-tuned using dataset:** https://huggingface.co/datasets/Open-Orca/OpenOrca
- **Step Count:** 1000
- **Batch Size:** 2
- **Gradient Accumulation Steps:** 4
- **Context Size:** 8192
- **Num examples:** 4,233,923
- **Trainable Parameters:** 41,943,040
- **Learning Rate:** 0.0625
- **Training Loss:** 1.090800
- **Fine-tuned using:** Google Colab Pro (Nvidia L4 runtime)
- **Developed by:** akumaburn
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
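
The exact training script is not part of this card, but the hyperparameters above map directly onto a standard Unsloth + TRL SFT setup. The following is a minimal sketch under that assumption: the LoRA rank of 16 is inferred from the 41,943,040 trainable-parameter count, and the OpenOrca prompt formatting shown is a hypothetical choice, not necessarily the one used.

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

max_seq_length = 8192  # context size listed above

# Load the 4-bit base model this card was finetuned from.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=max_seq_length,
    load_in_4bit=True,
)

# LoRA rank 16 over all attention/MLP projections gives 41,943,040
# trainable parameters on Llama 3 8B, matching the figure above.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# OpenOrca rows have system_prompt/question/response fields; this
# concatenation is one possible formatting (assumption).
dataset = load_dataset("Open-Orca/OpenOrca", split="train")
dataset = dataset.map(lambda row: {
    "text": f"{row['system_prompt']}\n\n{row['question']}\n\n{row['response']}"
})

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        per_device_train_batch_size=2,   # batch size listed above
        gradient_accumulation_steps=4,
        max_steps=1000,                  # step count listed above
        learning_rate=0.0625,            # as reported above
        output_dir="outputs",
    ),
)
trainer.train()
```
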
Some GGUF quantizations are included as well.
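
The GGUF files can be run with llama.cpp or its Python bindings. Below is a minimal sketch using llama-cpp-python, assuming one of the included quantizations has been downloaded locally; the filename and quantization level here are hypothetical placeholders.

```python
from llama_cpp import Llama

# Path and quantization level are placeholders; point this at the GGUF
# file actually downloaded from this repository.
llm = Llama(
    model_path="open-orca-llama-3-8b.Q4_K_M.gguf",
    n_ctx=8192,  # matches the context size listed above
)

output = llm(
    "Explain gradient accumulation in one paragraph.",
    max_tokens=256,
)
print(output["choices"][0]["text"])
```
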
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)