waadarsh committed
Commit 4cd3d91 · verified · 1 Parent(s): 4c92df1

Update README.md

Files changed (1)
  1. README.md +10 -7
README.md CHANGED
@@ -6,12 +6,15 @@ model_creator: Mistral AI
 model_name: mistral_7b_magnite_finetuned
 model_type: mistral
 pipeline_tag: text-generation
-prompt_template: '<s>[INST]{prompt} [/INST]
-
-'
+prompt_template: |
+  <s>[INST]{prompt} [/INST]
 quantized_by: waadarsh
 tags:
 - finetuned
+datasets:
+- waadarsh/magnite-dataset
+language:
+- en
 ---
 
 # Mistral 7B Magnite Finetuned - GGUF
@@ -99,7 +102,7 @@ The following clients/libraries will automatically download models for you, prov
 
 ### In `text-generation-webui`
 
-Under Download Model, you can enter the model repo: TheBloke/Mistral-7B-Instruct-v0.1-GGUF and below it, a specific filename to download, such as: mistral-7b-instruct-v0.1.Q4_K_M.gguf.
+Under Download Model, you can enter the model repo: waadarsh/mistral_7b_magnite_finetuned-GGUF and below it, a specific filename to download, such as: mistral-7b-instruct-v0.1.Q4_K_M.gguf.
 
 Then click Download.
 
@@ -114,7 +117,7 @@ pip3 install huggingface-hub
 Then you can download any individual model file to the current directory, at high speed, with a command like this:
 
 ```shell
-huggingface-cli download TheBloke/Mistral-7B-Instruct-v0.1-GGUF mistral-7b-instruct-v0.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+huggingface-cli download waadarsh/mistral_7b_magnite_finetuned-GGUF mistral_7b_magnite_finetuned.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
 
 <details>
@@ -194,7 +197,7 @@ CT_METAL=1 pip install ctransformers --no-binary ctransformers
 from ctransformers import AutoModelForCausalLM
 
 # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
-llm = AutoModelForCausalLM.from_pretrained("TheBloke/Mistral-7B-Instruct-v0.1-GGUF", model_file="mistral-7b-instruct-v0.1.Q4_K_M.gguf", model_type="mistral", gpu_layers=50)
+llm = AutoModelForCausalLM.from_pretrained("waadarsh/mistral_7b_magnite_finetuned-GGUF", model_file="mistral_7b_magnite_finetuned.Q4_K_M.gguf", model_type="mistral", gpu_layers=50)
 
 print(llm("AI is going to"))
 ```
@@ -209,4 +212,4 @@ Here are guides on using llama-cpp-python and ctransformers with LangChain:
 <!-- README_GGUF.md-how-to-run end -->
 
 <!-- footer start -->
-<!-- 200823 -->
+<!-- 200823 -->
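The updated `huggingface-cli` command in the diff above also has a Python equivalent via `huggingface_hub`. A minimal sketch, assuming the repo ID and filename shown in the hunk exist on the Hub as written:

```python
from huggingface_hub import hf_hub_download

# Fetch one GGUF file from the repo referenced in the diff above.
# Repo ID and filename are taken from the updated README text; adjust
# them if the actual repository layout differs.
model_path = hf_hub_download(
    repo_id="waadarsh/mistral_7b_magnite_finetuned-GGUF",
    filename="mistral_7b_magnite_finetuned.Q4_K_M.gguf",
    local_dir=".",
)
print(model_path)  # local path to the downloaded .gguf file
```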
 
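The new `prompt_template` front-matter field and the updated `ctransformers` snippet fit together as follows. A minimal sketch, assuming the Q4_K_M file from the download step; `gpu_layers=50` mirrors the diff, set it to 0 for CPU-only:

```python
from ctransformers import AutoModelForCausalLM

# Template from the updated YAML front matter; {prompt} is the placeholder.
PROMPT_TEMPLATE = "<s>[INST]{prompt} [/INST]"

# Model repo and file name as given in the updated README.
llm = AutoModelForCausalLM.from_pretrained(
    "waadarsh/mistral_7b_magnite_finetuned-GGUF",
    model_file="mistral_7b_magnite_finetuned.Q4_K_M.gguf",
    model_type="mistral",
    gpu_layers=50,  # number of layers to offload to GPU; 0 disables offload
)

print(llm(PROMPT_TEMPLATE.format(prompt="AI is going to")))
```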
 
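The unchanged context in the last hunk points to llama-cpp-python guides as well; a minimal sketch of loading the same file with that library instead, assuming the GGUF was downloaded to the current directory as above:

```python
from llama_cpp import Llama

# Load the locally downloaded GGUF file (path assumed from the CLI step).
llm = Llama(model_path="./mistral_7b_magnite_finetuned.Q4_K_M.gguf")

# Wrap the input in the README's prompt template and generate a completion.
output = llm("<s>[INST]AI is going to [/INST]", max_tokens=64)
print(output["choices"][0]["text"])
```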