ehristoforu committed e0939f8 · verified · 1 parent: 668ce35

Upload README.md with huggingface_hub

---
language:
- en
- fr
- ru
- de
- ja
- ko
- zh
- it
- uk
- multilingual
- code
library_name: transformers
tags:
- mistral
- gistral
- gistral-16b
- multilingual
- code
- 128k
- metamath
- grok-1
- anthropic
- openhermes
- instruct
- merge
- llama-cpp
- gguf-my-repo
base_model:
- Gaivoronsky/Mistral-7B-Saiga
- snorkelai/Snorkel-Mistral-PairRM-DPO
- OpenBuddy/openbuddy-mistral2-7b-v20.3-32k
- meta-math/MetaMath-Mistral-7B
- HuggingFaceH4/mistral-7b-grok
- HuggingFaceH4/mistral-7b-anthropic
- NousResearch/Yarn-Mistral-7b-128k
- ajibawa-2023/Code-Mistral-7B
- SherlockAssistant/Mistral-7B-Instruct-Ukrainian
datasets:
- HuggingFaceH4/grok-conversation-harmless
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized_fixed
- HuggingFaceH4/cai-conversation-harmless
- meta-math/MetaMathQA
- emozilla/yarn-train-tokenized-16k-mistral
- snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset
- microsoft/orca-math-word-problems-200k
- m-a-p/Code-Feedback
- teknium/openhermes
- lksy/ru_instruct_gpt4
- IlyaGusev/ru_turbo_saiga
- IlyaGusev/ru_sharegpt_cleaned
- IlyaGusev/oasst1_ru_main_branch
pipeline_tag: text-generation
---
# ehristoforu/Gistral-16B-Q4_K_M-GGUF
This model was converted to GGUF format from [`ehristoforu/Gistral-16B`](https://huggingface.co/ehristoforu/Gistral-16B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ehristoforu/Gistral-16B) for more details on the model.
## Use with llama.cpp

Install llama.cpp through brew:

```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo ehristoforu/Gistral-16B-Q4_K_M-GGUF --model gistral-16b.Q4_K_M.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo ehristoforu/Gistral-16B-Q4_K_M-GGUF --model gistral-16b.Q4_K_M.gguf -c 2048
```
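Once the server is up, you can send it completion requests over HTTP. A minimal sketch, assuming the server is listening on its default address `localhost:8080` (adjust if you passed `--host`/`--port`) and using llama.cpp's `/completion` endpoint:

```shell
# Ask the running llama-server for a completion.
# Assumes the default host/port; change the URL if you started
# the server with --host or --port.
curl -s http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "The meaning to life and the universe is", "n_predict": 64}'
```

The response is a JSON object whose `content` field holds the generated text.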

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m gistral-16b.Q4_K_M.gguf -n 128
```