---
library_name: transformers
license: apache-2.0
base_model: NousResearch/Hermes-2-Pro-Llama-3-8B
language:
- en
datasets:
- teknium/OpenHermes-2.5
tags:
- gpt
- llm
- large language model
- nous-research
- nous-hermes
- Llama-3
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
- function calling
- json mode
- axolotl
- merges
thumbnail: >-
  https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/HQnQmNM1L3KXGhp0wUzHH.png
pipeline_tag: text-generation
---
# mlx-community/Hermes-2-Theta-Llama-3-8B-8bit

This model was converted to MLX format from [`NousResearch/Hermes-2-Theta-Llama-3-8B`](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B) using mlx-lm version **0.14.3**.

Converted and uploaded by @ucheog ([Uche Ogbuji](https://ucheog.carrd.co/)).

Refer to the [original model card](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B) for more details on the model.
## Use with mlx

```sh
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Load the 8-bit quantized model and its tokenizer from the Hugging Face Hub
model, tokenizer = load('mlx-community/Hermes-2-Theta-Llama-3-8B-8bit')

# Generate a completion; verbose=True streams tokens as they are produced
response = generate(model, tokenizer, prompt='Hello! Tell me something good.', verbose=True)
```
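
Hermes models are trained on the ChatML prompt format, so results are usually better when the prompt is built through the tokenizer's chat template rather than passed as raw text. A minimal sketch (the message content is illustrative):

```python
from mlx_lm import load, generate

model, tokenizer = load('mlx-community/Hermes-2-Theta-Llama-3-8B-8bit')

# Wrap the user message in the model's ChatML template
messages = [{'role': 'user', 'content': 'Hello! Tell me something good.'}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```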
## Conversion command

```sh
python -m mlx_lm.convert --hf-path NousResearch/Hermes-2-Theta-Llama-3-8B \
    --mlx-path ~/.local/share/models/mlx/Hermes-2-Theta-Llama-3-8B \
    -q --q-bits 8
```
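
To sanity-check the converted weights without writing any Python, mlx-lm also ships a command-line generator. A quick sketch, assuming the flags of the mlx-lm 0.14.x CLI (the prompt text is illustrative):

```sh
# Stream a completion directly from the converted model
python -m mlx_lm.generate --model mlx-community/Hermes-2-Theta-Llama-3-8B-8bit \
    --prompt 'Hello! Tell me something good.'
```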