---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- Locutusque/TinyMistral-248M-v2
- Locutusque/TinyMistral-248M-v2.5
- Locutusque/TinyMistral-248M-v2.5-Instruct
- jtatman/tinymistral-v2-pycoder-instruct-248m
- Felladrin/TinyMistral-248M-SFT-v4
- Locutusque/TinyMistral-248M-v2-Instruct
base_model:
- Locutusque/TinyMistral-248M-v2
- Locutusque/TinyMistral-248M-v2.5
- Locutusque/TinyMistral-248M-v2.5-Instruct
- jtatman/tinymistral-v2-pycoder-instruct-248m
- Felladrin/TinyMistral-248M-SFT-v4
- Locutusque/TinyMistral-248M-v2-Instruct
---
# TinyMistral-6x248M

TinyMistral-6x248M is a Mixture of Experts (MoE) model made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Locutusque/TinyMistral-248M-v2](https://huggingface.co/Locutusque/TinyMistral-248M-v2)
* [Locutusque/TinyMistral-248M-v2.5](https://huggingface.co/Locutusque/TinyMistral-248M-v2.5)
* [Locutusque/TinyMistral-248M-v2.5-Instruct](https://huggingface.co/Locutusque/TinyMistral-248M-v2.5-Instruct)
* [jtatman/tinymistral-v2-pycoder-instruct-248m](https://huggingface.co/jtatman/tinymistral-v2-pycoder-instruct-248m)
* [Felladrin/TinyMistral-248M-SFT-v4](https://huggingface.co/Felladrin/TinyMistral-248M-SFT-v4)
* [Locutusque/TinyMistral-248M-v2-Instruct](https://huggingface.co/Locutusque/TinyMistral-248M-v2-Instruct)
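
Because the six sources are combined into a single Mixtral-style checkpoint (note the `mixtral` tag), the resulting expert layout can be checked from the model config alone. A minimal sketch, assuming the standard Mixtral config attributes (`num_local_experts`, `num_experts_per_tok`):

```python
from transformers import AutoConfig

# Fetch only the configuration file; no model weights are downloaded.
config = AutoConfig.from_pretrained("M4-ai/TinyMistral-6x248M")

print(config.model_type)           # expected: "mixtral"
print(config.num_local_experts)    # expected: 6, one per source model
print(config.num_experts_per_tok)  # number of experts activated per token
```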
## 🧩 Configuration

```yaml
base_model: Locutusque/TinyMistral-248M-v2.5
experts:
  - source_model: Locutusque/TinyMistral-248M-v2
    positive_prompts:
      - "An emerging"
      - "assistant"
      - "TITLE"
      - "begin"
  - source_model: Locutusque/TinyMistral-248M-v2.5
    positive_prompts:
      - "Python"
      - "C++"
      - "AI"
      - "textbook"
  - source_model: Locutusque/TinyMistral-248M-v2.5-Instruct
    positive_prompts:
      - "chemistry"
      - "biology"
      - "physics"
      - "math"
      - "history"
      - "code"
  - source_model: jtatman/tinymistral-v2-pycoder-instruct-248m
    positive_prompts:
      - "code"
      - "python"
      - "programming"
      - "algorithm"
  - source_model: Felladrin/TinyMistral-248M-SFT-v4
    positive_prompts:
      - "Escreba"
      - "Voici"
      - "Para"
      - "Cuales"
      - "Welche"
      - "If you had to imagine"
  - source_model: Locutusque/TinyMistral-248M-v2-Instruct
    positive_prompts:
      - "Write an essay"
      - "What are"
      - "instruct"
      - "How does"
      - "Identify the"
```
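
The `positive_prompts` steer the router: during the merge, each expert's gate is initialized from the base model's representations of its prompts, so tokens that resemble those prompts are preferentially routed to that expert. Below is a rough sketch of the idea only, not mergekit's actual implementation; the `embed` helper is a toy, deterministic stand-in for a real hidden-state embedding, and only two of the six experts are shown:

```python
import torch

DIM = 32  # toy hidden size; the real router works in the model's hidden_size

def embed(text: str) -> torch.Tensor:
    # Toy stand-in for a base-model hidden-state embedding (deterministic per string).
    gen = torch.Generator().manual_seed(sum(text.encode()))
    return torch.randn(DIM, generator=gen)

# One gate vector per expert: the mean embedding of its positive prompts.
expert_prompts = {
    "Locutusque/TinyMistral-248M-v2": ["An emerging", "assistant", "TITLE", "begin"],
    "jtatman/tinymistral-v2-pycoder-instruct-248m": ["code", "python", "programming", "algorithm"],
}
gate_vectors = {
    name: torch.stack([embed(p) for p in prompts]).mean(dim=0)
    for name, prompts in expert_prompts.items()
}

# At inference time, the router scores each token's hidden state against the
# gate vectors and activates the top-k experts for that token.
def route(hidden_state: torch.Tensor, k: int = 2) -> list[str]:
    scores = {name: torch.dot(hidden_state, g).item() for name, g in gate_vectors.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(route(embed("python"), k=1))  # the pycoder expert should score highest
```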
## 💻 Usage

```python
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "M4-ai/TinyMistral-6x248M"
tokenizer = AutoTokenizer.from_pretrained(model)

# Load the model in 4-bit (requires bitsandbytes and a CUDA GPU).
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

# Format the conversation with the model's chat template before generating.
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
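
Since each expert is only 248M parameters, the merged model is small enough to run without quantization, e.g. on CPU. A minimal sketch using the plain `generate` API (no bitsandbytes required; the prompt is just an example):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "M4-ai/TinyMistral-6x248M"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # full precision, CPU-friendly

inputs = tokenizer("Mixture of Experts models work by", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```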