TheBloke committed
Commit cf018fd · 1 Parent(s): c1d79fe

Update README.md

Files changed (1):
  1. README.md +9 -7
README.md CHANGED
@@ -30,17 +30,19 @@ I have the following Vicuna 1.1 repositories available:
 
 ## How to easily download and use this model in text-generation-webui
 
-Load text-generation-webui as you normally do.
+Open the text-generation-webui UI as normal.
 
 1. Click the **Model tab**.
-2. Under **Download custom model or LoRA**, enter this repo name: `TheBloke/vicuna-13B-1.1-GPTQ-4bit-128g`.
+2. Under **Download custom model or LoRA**, enter `TheBloke/vicuna-13B-1.1-GPTQ-4bit-128g`.
 3. Click **Download**.
 4. Wait until it says it's finished downloading.
-5. As this is a GPTQ model, fill in the `GPTQ parameters` on the right: `Bits = 4`, `Groupsize = 128`, `model_type = Llama`
-6. Now click the **Refresh** icon next to **Model** in the top left.
-7. In the **Model drop-down**: choose this model: `vicuna-13B-1.1-GPTQ-4bit-128g`.
-8. Click **Reload the Model** in the top right.
-9. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!
+5. Click the **Refresh** icon next to **Model** in the top left.
+6. In the **Model drop-down**: choose the model you just downloaded, eg `vicuna-13B-1.1-GPTQ-4bit-128g`.
+7. If you see an error in the bottom right, ignore it - it's temporary.
+8. Check that the `GPTQ parameters` are correct on the right: `Bits = 4`, `Groupsize = 128`, `model_type = Llama`
+9. Click **Save settings for this model** in the top right.
+10. Click **Reload the Model** in the top right.
+11. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!
 
 
 ## GIBBERISH OUTPUT
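The `GPTQ parameters` named in the diff (`Bits = 4`, `Groupsize = 128`, `model_type = Llama`) describe how this checkpoint was quantized, and all three must match the model for it to load and generate coherent text. A minimal sketch of recording and sanity-checking those values — note the dict layout and the `check_gptq_params` helper are illustrative assumptions, not text-generation-webui's actual settings format:

```python
# GPTQ quantization parameters for TheBloke/vicuna-13B-1.1-GPTQ-4bit-128g,
# as given in the README. This dict layout is illustrative only; it is NOT
# text-generation-webui's on-disk settings format.
GPTQ_PARAMS = {
    "wbits": 4,             # Bits = 4: weights quantized to 4-bit
    "groupsize": 128,       # Groupsize = 128: one scale/zero per 128 weights
    "model_type": "llama",  # Vicuna is a fine-tune of LLaMA
}


def check_gptq_params(params: dict) -> bool:
    """Return True if the parameters match what this checkpoint expects.

    Hypothetical helper: mismatched values here are the usual cause of the
    'GIBBERISH OUTPUT' failure mode the README goes on to describe.
    """
    return (
        params.get("wbits") == 4
        and params.get("groupsize") == 128
        and params.get("model_type", "").lower() == "llama"
    )


print(check_gptq_params(GPTQ_PARAMS))  # → True
```

Keeping the expected values in one place like this makes it easy to see why step 8 of the updated instructions asks you to verify, rather than guess, the loader settings before clicking **Save settings for this model**.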