mc0ps committed on
Commit b5e0518 · verified · 1 Parent(s): 0f2399f

Update README.md

Files changed (1): README.md (+10 −5)
README.md CHANGED
@@ -4,13 +4,18 @@ tags:
 - mlx
 ---
 
-# mistral-ft-optimized-1227-4bit-mlx
+# mlx-community/mistral-ft-optimized-1227-4bit-mlx
 This model was converted to MLX format from [`OpenPipe/mistral-ft-optimized-1227`]().
 Refer to the [original model card](https://huggingface.co/OpenPipe/mistral-ft-optimized-1227) for more details on the model.
 ## Use with mlx
+
 ```bash
-pip install mlx
-git clone https://github.com/ml-explore/mlx-examples.git
-cd mlx-examples/llms/hf_llm
-python generate.py --model mlx-community/mistral-ft-optimized-1227-4bit-mlx --prompt "My name is"
+pip install mlx-lm
+```
+
+```python
+from mlx_lm import load, generate
+
+model, tokenizer = load("mlx-community/mistral-ft-optimized-1227-4bit-mlx")
+response = generate(model, tokenizer, prompt="hello", verbose=True)
 ```
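Besides the Python API shown in the updated README, the same generation can be done from a terminal. This is a sketch assuming the `mlx-lm` package exposes an `mlx_lm.generate` module entry point with `--model` and `--prompt` flags (an assumption about the package's CLI; it is not part of this commit). The model weights are downloaded from the Hub on first use:

```shell
# Install the MLX LM package (assumed available on PyPI as mlx-lm)
pip install mlx-lm

# Run generation via the module CLI; --model and --prompt flags are assumed.
# Downloads the 4-bit weights from the Hub on first run.
python -m mlx_lm.generate \
  --model mlx-community/mistral-ft-optimized-1227-4bit-mlx \
  --prompt "My name is"
```

This mirrors the removed `generate.py` invocation from `mlx-examples`, which the commit replaces with the packaged `mlx-lm` workflow.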