---
tags:
- autotrain
- text-generation
base_model: ahxt/llama2_xs_460M_experimental
datasets:
- KnutJaegersberg/WizardLM_evol_instruct_V2_196k_instruct_format
widget:
- text: |-
    ### Instruction:
    Find me a list of some nice places to visit around the world.
    
    ### Response:
- text: |-
    ### Instruction:
    Tell me a story about some magical place.
    
    ### Response:
- text: |-
    ### Instruction:
    Tell me all you know about the Earth.
    
    ### Response:
inference:
  parameters:
    max_new_tokens: 32
    repetition_penalty: 1.15
    do_sample: true
    temperature: 0.5
    top_p: 0.5
---

# ahxt's llama2_xs_460M_experimental trained on WizardLM's Evol Instruct dataset using AutoTrain

- Base model: [ahxt/llama2_xs_460M_experimental](https://huggingface.co/ahxt/llama2_xs_460M_experimental)
- Dataset: [KnutJaegersberg/WizardLM_evol_instruct_V2_196k_instruct_format](https://huggingface.co/datasets/KnutJaegersberg/WizardLM_evol_instruct_V2_196k_instruct_format)
- [Training hyperparameters](https://huggingface.co/Felladrin/llama2_xs_460M_experimental_evol_instruct/blob/cc151c5669ea37c3ef972e375c74f2d9bfd92b49/training_params.json)
- Availability in other ML formats:
  - GGUF: [afrideva/llama2_xs_460M_experimental_evol_instruct-GGUF](https://huggingface.co/afrideva/llama2_xs_460M_experimental_evol_instruct-GGUF)
  - ONNX: [Felladrin/onnx-llama2_xs_460M_experimental_evol_instruct](https://huggingface.co/Felladrin/onnx-llama2_xs_460M_experimental_evol_instruct)

## Recommended Prompt Format

```
### Instruction:
<instruction>

### Response:
```
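As a minimal sketch, the template above can be assembled in Python like this (the `build_prompt` helper name is purely illustrative, not part of the model's API):

```python
def build_prompt(instruction: str) -> str:
    # Wrap the user instruction in the recommended prompt format.
    return f"### Instruction:\n{instruction}\n\n### Response:"

prompt = build_prompt("Find me a list of some nice places to visit around the world.")
```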

## Recommended Inference Parameters

```yml
repetition_penalty: 1.15
do_sample: true
temperature: 0.5
top_p: 0.5
```
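
## Usage Example

A minimal usage sketch with the `transformers` library, combining the recommended prompt format and inference parameters; `max_new_tokens=32` is taken from the widget settings above and can be raised for longer responses:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Felladrin/llama2_xs_460M_experimental_evol_instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build the prompt using the recommended format.
prompt = "### Instruction:\nTell me all you know about the Earth.\n\n### Response:"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate with the recommended inference parameters from this card.
outputs = model.generate(
    **inputs,
    max_new_tokens=32,
    repetition_penalty=1.15,
    do_sample=True,
    temperature=0.5,
    top_p=0.5,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```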