---
license: mit
---

# Chicka-Mixtral-3x7b

## Model Description

This model is a Mixture-of-Experts (MoE) merge of three Mistral-based models:

- **Base model:** openchat/openchat-3.5-0106
- **Code expert:** beowolx/CodeNinja-1.0-OpenChat-7B
- **Math expert:** meta-math/MetaMath-Mistral-7B

This is the config used in the merging process:

```yaml
base_model: openchat/openchat-3.5-0106
experts:
  - source_model: openchat/openchat-3.5-0106
    positive_prompts:
    - "chat"
    - "assistant"
    - "tell me"
    - "explain"
    - "I want"
  - source_model: beowolx/CodeNinja-1.0-OpenChat-7B
    positive_prompts:
    - "code"
    - "python"
    - "javascript"
    - "programming"
    - "algorithm"
    - "C#"
    - "C++"
    - "debug"
    - "runtime"
    - "html"
    - "command"
    - "nodejs"
  - source_model: meta-math/MetaMath-Mistral-7B
    positive_prompts:
    - "reason"
    - "math"
    - "mathematics"
    - "solve"
    - "count"
    - "calculate"
    - "arithmetic"
    - "algebra"
```

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("Chickaboo/Chicka-Mixtral-3x7b")
tokenizer = AutoTokenizer.from_pretrained("Chickaboo/Chicka-Mixtral-3x7b")

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
    {"role": "user", "content": "Do you have mayonnaise recipes?"}
]

# Render the conversation with the model's chat template; add_generation_prompt
# appends the assistant header so the model answers rather than continuing the user turn.
encodeds = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
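
A 3x7B MoE checkpoint is heavy for a single consumer GPU, so a quantized load can help. The sketch below is one way to do it in 4-bit; it assumes the `bitsandbytes` and `accelerate` packages are installed, and the quantization settings are illustrative rather than part of the original card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Illustrative 4-bit load (assumes bitsandbytes is installed);
# device_map="auto" lets accelerate place the weights across available devices.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "Chickaboo/Chicka-Mixtral-3x7b",
    quantization_config=quant_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Chickaboo/Chicka-Mixtral-3x7b")
```

With `device_map="auto"` the weights are placed automatically, so skip `model.to(device)` and move the tokenized inputs to `model.device` before calling `generate`.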