GenjiOneTrick committed on
Commit c99307b · 1 Parent(s): c41327a

adj: description

Files changed (1): app.py (+5 -3)
app.py CHANGED
@@ -2,12 +2,14 @@ import gradio as gr
 from gpt4all import GPT4All
 from huggingface_hub import hf_hub_download
 
-title = "Mistral-7B-Instruct-GGUF-Run-On-CPU-Basic"
+title = "Mistral-7B-Instruct-GGUF Run On CPU-Basic Free Hardware"
 
 description = """
-🔎 Mistral-7B-Instruct-v0.1, 4-bit quantization balanced quality gguf version, running on CPU. English Only. (Also support other languages but the quality's not good)
+🔎 [Mistral AI's Mistral 7B Instruct v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) [GGUF format model](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF), 4-bit quantization balanced quality gguf version, running on CPU. English Only (Also support other languages but the quality's not good). Using [GitHub - llama.cpp](https://github.com/ggerganov/llama.cpp) [GitHub - gpt4all](https://github.com/nomic-ai/gpt4all).
 
-🔨 Running on CPU-Basic Hardware. Suggest duplicating this space to run without a queue. [GGUF format model files](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF) for [Mistral AI's Mistral 7B Instruct v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1). How to insert system prompt: [Learn more](https://docs.mistral.ai/usage/guardrailing). [GitHub - gpt4all](https://github.com/nomic-ai/gpt4all) [GitHub - llama.cpp](https://github.com/ggerganov/llama.cpp)
+🔨 Running on CPU-Basic free hardware. Suggest duplicating this space to run without a queue.
+
+Mistral does not support system prompt symbol(such as "<<SYS>>") now, input your system prompt in the first message. Learn more: [Guardrailing Mistral 7B](https://docs.mistral.ai/usage/guardrailing).
 """
 
 """