totally-not-an-llm committed on
Commit 4717edd · 1 Parent(s): 03b190f

Update README.md

Files changed (1)
  1. README.md +1 -9
README.md CHANGED
@@ -10,14 +10,6 @@ Introducing EverythingLM, a llama-2 based, general-purpose 13b model with 16k co
 
 The model is completely uncensored.
 
- ### GGML quants:
- soon
-
- Make sure to use correct rope scaling settings:
- `-c 16384 --rope-freq-base 10000 --rope-freq-scale 0.25`
- ### GPTQ quants:
- soon
-
 ### Notable features:
 - Automatically triggered CoT reasoning.
 - Verbose and detailed replies.
@@ -38,4 +30,4 @@ Training took about 2.5 hours using QLoRa on 1xA100, so this model can be recrea
 ### Future plans:
 - Native finetune.
 - Other model sizes.
- - Test some model merges using this model. (Specifically OpenOrca and Platypus models)
+ - Test some model merges using this model.
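The rope scaling flags in the removed section follow directly from the model's context extension. A minimal sketch of the arithmetic, assuming Llama-2's native 4096-token context (the 16384 target is taken from the `-c 16384` flag in the diff above): linear RoPE scaling compresses positions by the ratio of base to extended context, which is where the `--rope-freq-scale 0.25` value comes from.

```python
# Why the removed README section paired -c 16384 with --rope-freq-scale 0.25.
# Assumption: Llama-2's pretraining context is 4096 tokens; linear RoPE
# scaling compresses position indices by base_context / extended_context.

BASE_CONTEXT = 4096       # Llama-2 native context length (assumed)
EXTENDED_CONTEXT = 16384  # target context, from `-c 16384` in the diff

rope_freq_scale = BASE_CONTEXT / EXTENDED_CONTEXT
print(rope_freq_scale)  # 0.25
```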