TheBloke committed on
Commit 62fff1f · 1 Parent(s): 774bdea

Upload README.md

Files changed (1): README.md (+10 -1)
README.md CHANGED
@@ -9,6 +9,15 @@ license: cc-by-nc-4.0
 model_creator: TFLai
 model_name: Luban Marcoroni 13B v3
 model_type: llama
+prompt_template: '### Instruction:
+
+
+  {prompt}
+
+
+  ### Response:
+
+  '
 quantized_by: TheBloke
 ---
 
@@ -44,8 +53,8 @@ Multiple GPTQ parameter permutations are provided; see Provided Files below for
 <!-- repositories-available start -->
 ## Repositories available
 
+* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Luban-Marcoroni-13B-v3-AWQ)
 * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Luban-Marcoroni-13B-v3-GPTQ)
-* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Luban-Marcoroni-13B-v3-GGUF)
 * [TFLai's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TFLai/Luban-Marcoroni-13B-v3)
 <!-- repositories-available end -->
 
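The `prompt_template` added to the front matter in this commit is an Alpaca-style instruction format. As a minimal sketch of how a client might apply it (assuming YAML flow-scalar folding collapses the blank lines to single newlines, and with a hypothetical `build_prompt` helper not defined in the model card):

```python
# Alpaca-style template from the README front matter (assumed rendering
# after YAML folding; the exact whitespace may differ slightly).
PROMPT_TEMPLATE = "### Instruction:\n\n{prompt}\n\n### Response:\n"

def build_prompt(prompt: str) -> str:
    """Substitute the user's instruction into the template (hypothetical helper)."""
    return PROMPT_TEMPLATE.format(prompt=prompt)

print(build_prompt("Summarise this model card in one sentence."))
```

The formatted string would then be passed as the input text to whichever inference backend (GPTQ, AWQ, or fp16) is being used.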