---
tags:
- autotrain
- text-generation
base_model: ahxt/llama2_xs_460M_experimental
datasets:
- KnutJaegersberg/WizardLM_evol_instruct_V2_196k_instruct_format
widget:
  - text: |-
      ### Instruction:
      Find me a list of some nice places to visit around the world.
      ### Response:
  - text: |-
      ### Instruction:
      Tell me all you know about the Earth.
      ### Response:
inference:
  parameters:
    max_new_tokens: 32
    repetition_penalty: 1.15
    do_sample: true
    temperature: 0.5
    top_p: 0.5
---
# ahxt's llama2_xs_460M_experimental trained on WizardLM's Evol Instruct dataset using AutoTrain
- Base model: [ahxt/llama2_xs_460M_experimental](https://huggingface.co/ahxt/llama2_xs_460M_experimental)
- Dataset: [KnutJaegersberg/WizardLM_evol_instruct_V2_196k_instruct_format](https://huggingface.co/datasets/KnutJaegersberg/WizardLM_evol_instruct_V2_196k_instruct_format)
- Training: 13.5 hours, using [these parameters](https://huggingface.co/Felladrin/llama2_xs_460M_experimental_evol_instruct/blob/cc151c5669ea37c3ef972e375c74f2d9bfd92b49/training_params.json)
## Recommended Prompt Format
```
### Instruction:
<instruction>
### Response:
```
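As a sketch, a small helper for wrapping a raw instruction in this template could look like the following (`format_prompt` is a hypothetical name, not something shipped with the model):
```python
def format_prompt(instruction: str) -> str:
    """Wrap a raw instruction in the model's expected prompt template."""
    return f"### Instruction:\n{instruction}\n### Response:"
```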
## Recommended Inference Parameters
```yml
repetition_penalty: 1.15
do_sample: true
temperature: 0.5
top_p: 0.5
```
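## Usage
A minimal generation sketch with the `transformers` library, combining the prompt format and the parameters above. The card doesn't specify loading code, so treat the snippet below as an assumption rather than the official usage:
```python
# Minimal sketch: load the model and generate with the recommended settings.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Felladrin/llama2_xs_460M_experimental_evol_instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Build the prompt in the recommended format.
prompt = "### Instruction:\nTell me all you know about the Earth.\n### Response:"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=32,
    repetition_penalty=1.15,
    do_sample=True,
    temperature=0.5,
    top_p=0.5,
    pad_token_id=tokenizer.eos_token_id,  # common precaution for Llama-style tokenizers without a pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```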