|
--- |
|
tags: |
|
- merge |
|
- mergekit |
|
- lazymergekit |
|
- GreenNode/GreenNode-mini-7B-multilingual-v1olet |
|
- KoboldAI/Mistral-7B-Holodeck-1 |
|
base_model: |
|
- GreenNode/GreenNode-mini-7B-multilingual-v1olet |
|
- KoboldAI/Mistral-7B-Holodeck-1 |
|
--- |
|
|
|
# HoloViolet-7B-test5 |
|
The best version of HoloViolet. At this point it seems outclassed by twizzler, but I still love it for its proactive writing and sometimes unexpected outputs. |
|
|
|
Update: quants available over [here](https://huggingface.co/mradermacher/HoloViolet-7B-GGUF), kudos to mradermacher. |
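If you want to run one of those quants locally, a minimal llama-cpp-python sketch would look something like this (the exact `.gguf` filename is an assumption, check the file list in mradermacher's repo):

```python
# Minimal sketch for running a GGUF quant with llama-cpp-python.
# The model_path filename below is an assumption; use whichever quant you downloaded.
from llama_cpp import Llama

llm = Llama(model_path="HoloViolet-7B.Q4_K_M.gguf", n_ctx=4096)

out = llm(
    "Write the opening scene of a mystery set on a space station.",
    max_tokens=256,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```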
|
|
|
A very descriptive model, harnessing the literary strengths of KoboldAI's Mistral Holodeck while being less schizo.

It manages to get a grasp of the situation and doesn't ignore context nearly as much, while expanding on it creatively.
|
It's not very subtle about telling you a character's intentions, as it is still a 7B, but it writes well imo. |
|
GreenNode V1olet is a great model for supplying smarts since it doesn't gravitate towards GPT-isms nearly as much as other smart Mistral tunes.
|
Use the Roleplay prompt preset in SillyTavern; I find simple prompts work better with these smaller models. A rough approximation of what such a prompt boils down to is sketched below.
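For reference, the Roleplay preset is essentially a plain Alpaca-style instruction wrapper. This is only a rough approximation of that idea, not the preset's exact wording:

```python
# Rough approximation of a simple Alpaca-style roleplay prompt, in the spirit
# of SillyTavern's Roleplay preset. The exact preset text may differ.
prompt = (
    "### Instruction:\n"
    "Write {{char}}'s next reply in this fictional roleplay with {{user}}.\n\n"
    "### Response:\n"
)
```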
|
|
|
HoloViolet-7B-test5 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): |
|
* [GreenNode/GreenNode-mini-7B-multilingual-v1olet](https://huggingface.co/GreenNode/GreenNode-mini-7B-multilingual-v1olet) |
|
* [KoboldAI/Mistral-7B-Holodeck-1](https://huggingface.co/KoboldAI/Mistral-7B-Holodeck-1) |
|
|
|
## 🧩 Configuration |
|
|
|
```yaml
slices:
  - sources:
      - model: GreenNode/GreenNode-mini-7B-multilingual-v1olet
        layer_range: [0, 32]
      - model: KoboldAI/Mistral-7B-Holodeck-1
        layer_range: [0, 32]
merge_method: slerp
base_model: GreenNode/GreenNode-mini-7B-multilingual-v1olet
parameters:
  t:
    - value: 0.32
dtype: bfloat16
```
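For intuition, slerp interpolates each pair of weight tensors along the arc between them rather than along a straight line, and `t: 0.32` keeps the result closer to the V1olet base while pulling in Holodeck's style. Here is a toy sketch of that math (simplified, not mergekit's actual implementation):

```python
# Toy illustration of spherical linear interpolation (slerp) between two
# weight tensors. Simplified sketch, not mergekit's actual implementation.
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    # Angle between the two tensors, treated as vectors.
    omega = torch.acos(torch.clamp(a_unit @ b_unit, -1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation.
        return (1.0 - t) * a + t * b
    out = (torch.sin((1.0 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape).to(a.dtype)

# t = 0.32 biases the blend toward the first (base) tensor.
merged = slerp(0.32, torch.randn(4096, 4096), torch.randn(4096, 4096))
```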
|
|
|
## 💻 Usage |
|
|
|
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "son-of-man/HoloViolet-7B-test5"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the messages with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the merged model for text generation.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```