Update README.md
Then, two LoRAs were merged into the basemix model using the script specified above:
* [limarp-llama2-v2](https://huggingface.co/lemonilia/limarp-llama2-v2) (Licensed under AGPLv3)
* [airoboros-lmoe-7b-2.1](https://huggingface.co/jondurbin/airoboros-lmoe-7b-2.1) (the creative version was used)

**Here are quantized versions of the model:**

* [GGUF fp16 and Q5_K_M](https://huggingface.co/androlike/astramix_l2_7b_gguf)
* [GPTQ 4bit 128g](https://huggingface.co/androlike/astramix_l2_7b_4bit_128g_gptq)

#### I suggest using the Alpaca instruct format:
```
### Instruction:
### Response: {prompt}
```
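The template above can be assembled programmatically before being sent to the model. A minimal sketch, assuming a simple helper of my own naming (not part of this model card) that fills the Instruction/Response slots as shown:

```python
# Hypothetical helper for the Alpaca instruct format used by this merge.
# The function name and parameters are illustrative assumptions; only the
# "### Instruction:" / "### Response:" layout comes from the model card.
def build_alpaca_prompt(instruction: str, response: str = "") -> str:
    """Return a prompt string in the Alpaca instruct layout shown above.

    Leave `response` empty when querying the model, so generation
    continues right after the "### Response: " marker.
    """
    return (
        "### Instruction:\n"
        f"{instruction}\n\n"
        f"### Response: {response}"
    )

print(build_alpaca_prompt("Describe a quiet harbor at dawn."))
```

Passing an empty `response` leaves the prompt open-ended, which is what you want when sampling a completion.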
## Limitations and risks
Llama 2 and its derivatives (finetunes) are licensed under the Llama 2 Community License; individual finetunes and (Q)LoRAs carry their own licenses depending on the datasets used for finetuning or for training the Low-Rank Adaptations.
Because limarp is part of the merge, this mix can generate heavily biased output that is not suitable for minors or a general audience.