---
license: llama3.1
base_model: cognitivecomputations/dolphin-2.9.4-llama3.1-8b
tags:
- generated_from_trainer
- mlx
datasets:
- cognitivecomputations/Dolphin-2.9
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- mlabonne/FineTome-100k
- arcee/agent_data
- PawanKrd/math-gpt-4o-200k
- cognitivecomputations/SystemChat-2.0
---
# mlx-community/dolphin-2.9.4-llama3.1-8b-4bit
The model [mlx-community/dolphin-2.9.4-llama3.1-8b-4bit](https://huggingface.co/mlx-community/dolphin-2.9.4-llama3.1-8b-4bit) was converted to MLX format from [cognitivecomputations/dolphin-2.9.4-llama3.1-8b](https://huggingface.co/cognitivecomputations/dolphin-2.9.4-llama3.1-8b) using mlx-lm version **0.19.1**.
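An equivalent conversion can be reproduced locally with mlx-lm's `convert` helper. The snippet below is a minimal sketch, assuming the Python API of mlx-lm 0.19.x; the output directory name is illustrative, not part of this repository.

```python
from mlx_lm import convert

# Download the original weights, quantize them to 4 bits, and write the
# result out in MLX format (output directory name chosen for illustration).
convert(
    hf_path="cognitivecomputations/dolphin-2.9.4-llama3.1-8b",
    mlx_path="dolphin-2.9.4-llama3.1-8b-4bit",
    quantize=True,
    q_bits=4,
)
```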
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Load the 4-bit quantized model and its tokenizer from the Hugging Face Hub.
model, tokenizer = load("mlx-community/dolphin-2.9.4-llama3.1-8b-4bit")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```