Commit 484619e (verified) by cgus, parent a109f5e

Update README.md

Files changed (1): README.md (+30 -1)
@@ -3,12 +3,41 @@ license: apache-2.0
 language:
 - en
 pipeline_tag: text-generation
-inference: true
+inference: false
 tags:
 - pytorch
 - mistral
 - finetuned
 ---
+# Mistral 7B - Holodeck exl2
+
+Original model: [Mistral-7B-Holodeck-1](https://huggingface.co/KoboldAI/Mistral-7B-Holodeck-1)
+Model creator: [KoboldAI](https://huggingface.co/KoboldAI)
+
+## Quants
+[4bpw-h6 (main)](https://huggingface.co/cgus/Mistral-7B-Holodeck-1-exl2/tree/main)
+[4.25bpw-h6](https://huggingface.co/cgus/Mistral-7B-Holodeck-1-exl2/tree/4.25bpw-h6)
+[4.65bpw-h6](https://huggingface.co/cgus/Mistral-7B-Holodeck-1-exl2/tree/4.65bpw-h6)
+[5bpw-h6](https://huggingface.co/cgus/Mistral-7B-Holodeck-1-exl2/tree/5bpw-h6)
+[6bpw-h6](https://huggingface.co/cgus/Mistral-7B-Holodeck-1-exl2/tree/6bpw-h6)
+[8bpw-h8](https://huggingface.co/cgus/Mistral-7B-Holodeck-1-exl2/tree/8bpw-h8)
+
+## Quantization notes
+
+Made with exllamav2 0.0.15 with the default dataset.
+
+## How to run
+
+This quantization runs on a GPU and requires the ExLlamaV2 loader, which is available in the following applications:
+
+[Text Generation Webui](https://github.com/oobabooga/text-generation-webui)
+
+[KoboldAI](https://github.com/henk717/KoboldAI)
+
+[ExUI](https://github.com/turboderp/exui)
+
+# Original card
+
 # Mistral 7B - Holodeck
 ## Model Description
 Mistral 7B-Holodeck is a finetune created using Mistral's 7B model.
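
The bpw (bits per weight) figures in the quant list above map roughly onto download and VRAM size. A minimal sketch of that arithmetic — illustrative only, since real exl2 files also carry the quantized h6/h8 output head and tensor metadata, and "7B" is itself a rounded parameter count:

```python
# Rough weight-file size for an exl2 quant: parameters * bits-per-weight / 8.
# Illustrative only: actual files run somewhat larger (quantized head, metadata),
# and the 7.0 used below is a rounded parameter count.
def approx_size_gb(n_params_billions: float, bpw: float) -> float:
    """Approximate weight size in GB for a model of n_params_billions at bpw."""
    # 1e9 params * bits, divided by 8 bits/byte, divided by 1e9 bytes/GB
    return n_params_billions * bpw / 8

for bpw in (4.0, 4.25, 4.65, 5.0, 6.0, 8.0):
    print(f"{bpw}bpw -> ~{approx_size_gb(7.0, bpw):.2f} GB")
```

For example, the 4.25bpw-h6 quant works out to roughly 3.7 GB of weights before overhead, which is why it is a common fit for 6 GB cards.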
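
Each quant above lives on its own git branch of the repo, so a specific bpw variant can be fetched by revision. A hypothetical sketch using `huggingface_hub` (assumes the package is installed; the repo id and branch names come from the quant list, while `fetch_quant` and the destination path are arbitrary examples):

```python
# Sketch: download one quant branch with huggingface_hub's snapshot_download.
# The repo id and branch names match the quant list; the helper name and
# destination directory are arbitrary examples, not part of the repo.
from huggingface_hub import snapshot_download

def fetch_quant(branch: str, dest: str) -> str:
    # Each bpw variant is a separate git branch, selected via `revision`.
    return snapshot_download(
        repo_id="cgus/Mistral-7B-Holodeck-1-exl2",
        revision=branch,
        local_dir=dest,
    )

# Example (downloads several GB of model files):
# fetch_quant("4.25bpw-h6", "models/Mistral-7B-Holodeck-1-exl2")
```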