Fixed README file.
README.md (changed)

@@ -175,7 +175,7 @@ First [build](https://github.com/ggerganov/llama.cpp/blob/master/docs/build.md)
 $ llama-server --hf-repo mirekphd/gte-Qwen2-1.5B-instruct-Q8_0 --hf-file gte-Qwen2-1.5B-instruct-Q8_0-00001-of-00002.gguf --n-gpu-layers 0 --ctx-size 131072 --embeddings
 
 # using a previously downloaded local model file(s)
-$ llama-server --model <path-to-hf-models>/mirekphd/gte-Qwen2-1.5B-instruct-Q8_0/gte-Qwen2-1.5B-instruct-Q8_0-00001-of-
+$ llama-server --model <path-to-hf-models>/mirekphd/gte-Qwen2-1.5B-instruct-Q8_0/gte-Qwen2-1.5B-instruct-Q8_0-00001-of-00002.gguf --n-gpu-layers 0 --ctx-size 131072 --embeddings
 
 ```
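A minimal sketch for checking the server once it is running with --embeddings, assuming llama-server's OpenAI-compatible /v1/embeddings route on the default 127.0.0.1:8080 (endpoint and port are assumptions here; adjust --host/--port if the server was started differently):

# request an embedding for a single input string (default host/port assumed)
$ curl -s http://127.0.0.1:8080/v1/embeddings -H "Content-Type: application/json" -d '{"input": "hello world"}'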