TheBloke committed on
Commit 255acd6 · 1 Parent(s): 1bdf0bc

Update README.md

Files changed (1)
  1. README.md +7 -2
README.md CHANGED
@@ -71,9 +71,14 @@ Please note that these GGMLs are **not compatible with llama.cpp, or currently w
 * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/minotaur-15B-GGML)
 * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/openaccess-ai-collective/minotaur-15b)
 
-## A note regarding context length
+## A note regarding context length: 8K
 
-it is currently untested as to whether the 8K context is compatible with available clients/libraries such as KoboldCpp, ctransformers, etc.
+It is confirmed that the 8K context of this model works in KoboldCpp, if you manually set max context to 8K by adjusting the text box above the slider:
+![.](https://s3.amazonaws.com/moonup/production/uploads/63cd4b6d1c8a5d1d7d76a778/LcoIOa7YdDZa-R-R4BWYw.png)
+
+(set it to 8192 at most)
+
+It is currently unknown whether it is compatible with other clients.
 
 If you have feedback on this, please let me know.
 
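For readers launching KoboldCpp from the command line rather than the GUI, a minimal sketch of an equivalent invocation follows. This is an assumption, not part of the commit: it assumes KoboldCpp's `--contextsize` flag is the CLI counterpart of the GUI max-context text box, and the model filename shown is illustrative only.

```
# Minimal sketch: launch KoboldCpp with max context raised to 8192 tokens.
# Assumptions: --contextsize mirrors the GUI text box described above,
# and the GGML filename below is illustrative, not a confirmed file name.
python koboldcpp.py minotaur-15b.ggmlv3.q4_0.bin --contextsize 8192
```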