---
license: llama2
pipeline_tag: text-generation
tags:
- llama
- llama2
---
# Introduction
Astramix is a merge of various Llama-2-7b finetuned models, created with the [ties-merge method](https://github.com/cg123/ties-merge); thanks to [Chargoddard](https://huggingface.co/chargoddard).
Subsequently, a LoRA merge script by [zarakiquemparte](https://github.com/zarakiquemparte/zaraki-tools) ([Hugging Face profile](https://huggingface.co/zarakiquemparte)) was used.
Approximate impressions of this model from short-term use:
* Strong roleplay capabilities, limited mainly by the 7B parameter count.
* Weak reasoning, a consequence of the model's size.
* Almost no censorship (though occasional refusals can still appear in the output).

Feel free to test the model.
The following base model was used for the merge: [Llama-2-7B-fp16](https://huggingface.co/TheBloke/Llama-2-7B-fp16)
#### List of models used for merge:
* [Nous-Hermes-llama-2-7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b)
* [airoboros-l2-7b-2.1](https://huggingface.co/jondurbin/airoboros-l2-7b-2.1)
* [orca_mini_v3_7b](https://huggingface.co/psmathur/orca_mini_v3_7b)
* [Platypus2-7B](https://huggingface.co/garage-bAInd/Platypus2-7B)
* [Tulpar-7b-v0](https://huggingface.co/HyperbeeAI/Tulpar-7b-v0)
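
For intuition, here is a minimal sketch of the TIES idea (trim, elect sign, disjoint merge) applied to a single parameter tensor. This is only an illustration of the technique, not the actual cg123/ties-merge script; the `density` value and the sign-election shortcut (sign of the summed deltas) are simplifying assumptions.

```python
import torch

def ties_merge_tensor(base: torch.Tensor, finetunes: list[torch.Tensor],
                      density: float = 0.2) -> torch.Tensor:
    """Illustrative TIES merge for one parameter tensor."""
    deltas = []
    for ft in finetunes:
        delta = ft - base
        # Trim: keep only the top-`density` fraction of deltas by magnitude.
        k = max(1, int(density * delta.numel()))
        threshold = delta.abs().flatten().topk(k).values.min()
        deltas.append(delta * (delta.abs() >= threshold))
    stacked = torch.stack(deltas)
    # Elect sign: dominant sign per parameter, approximated by the sign of the sum.
    elected = torch.sign(stacked.sum(dim=0))
    # Disjoint merge: average only the deltas that agree with the elected sign.
    agree = torch.sign(stacked) == elected
    merged = (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)
    return base + merged
```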
Then, two LoRAs were merged into the base mix model, using the script mentioned above (a generic `peft` equivalent is sketched after the list):
* [limarp-llama2-v2](https://huggingface.co/lemonilia/limarp-llama2-v2) (Licensed under AGPLv3)
* [airoboros-lmoe-7b-2.1](https://huggingface.co/jondurbin/airoboros-lmoe-7b-2.1) (the creative variant was used)
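
The actual merging was done with zarakiquemparte's script linked above; as a rough equivalent, the generic `peft` API can fold a LoRA into a base model like this (the adapter path below is a placeholder, not the literal layout of the repos above):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model (here, the TIES-merged result), then apply a LoRA on top.
base = AutoModelForCausalLM.from_pretrained("TheBloke/Llama-2-7B-fp16")
# "path/to/lora-adapter" is a placeholder; the LoRAs actually used are listed above.
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")
# Fold the LoRA weights into the base weights and drop the adapter wrappers.
model = model.merge_and_unload()
model.save_pretrained("merged-model")
```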
#### I suggest using the Alpaca instruct format:
```
### Instruction:
{prompt}

### Response:
```
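
A minimal generation example with `transformers`, assuming the model is available locally or on the Hub (the repo id below is a placeholder; substitute the actual path):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/Astramix"  # placeholder; use the actual repo id or local path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build the Alpaca-style prompt shown above.
prompt = "### Instruction:\nWrite a short scene between two rival knights.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```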
## Limitations and risks
Llama 2 is licensed under the Llama 2 Community License; the various finetunes and (Q)LoRAs carry their own licenses, depending on the datasets used for finetuning or for training the Low-Rank Adaptations.
Because LimaRP is part of the merge, this mix can generate heavily biased output that is not suitable for minors or a general audience.