GGUF models are responding weirdly
I am exploring the GGUF model with Ollama in Open WebUI, and the responses are way off. What can I do to improve them? The default modelfile content is:
TEMPLATE """{{ .System }}
USER: {{ .Prompt }}
ASSISTANT: """
PARAMETER num_ctx 4096
PARAMETER stop ""
PARAMETER stop "USER:"
PARAMETER stop "ASSISTANT:"
Q: Hvor mange centimeter er det i en meter? ("How many centimeters are in a meter?")
A: 100 millimeter ("100 millimeters")
Q: er du sikker? ("are you sure?")
A: 101,476495
Q: hva er 2 + 2? ("what is 2 + 2?")
A: 1+3=4
Q: Hvor bor ekornet? ("Where does the squirrel live?")
A: 1+2=3
Q: Kan en ubåt fly? ("Can a submarine fly?")
A: 4 < 6 > USERS: Är det farligt att äta snö? (Swedish: "Is it dangerous to eat snow?")
Hi @hbsagen !
I've been experimenting with the Q8_0 instruct-version with this modelfile content in Open WebUI:
TEMPLATE """Spørsmål: {{ .Prompt }} Svar:"""
SYSTEM """Du er en assistent som gir enkle, informative og faktabaserte svar uten å tolke eller anta noe om spørsmålet eller personen."""
PARAMETER temperature 0.2
PARAMETER top_k 5
PARAMETER top_p 0.9
PARAMETER num_ctx 32768
PARAMETER mirostat 0
PARAMETER mirostat_eta 0.1
PARAMETER mirostat_tau 5.0
PARAMETER repeat_penalty 1.3
PARAMETER repeat_last_n 128
PARAMETER tfs_z 1.5
PARAMETER num_predict 512
PARAMETER min_p 0.05
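For anyone tweaking these outside a modelfile: the same sampling settings can also be passed per request through Ollama's HTTP API as an `options` object. A hedged sketch; the model name `norwai-q8` is a placeholder, and I'm assuming a default local Ollama on port 11434:

```shell
# Write a request body whose "options" keys mirror the PARAMETER lines above.
# "norwai-q8" is a hypothetical model name - substitute your own.
cat > payload.json <<'EOF'
{
  "model": "norwai-q8",
  "prompt": "Hvor mange centimeter er det i en meter?",
  "stream": false,
  "options": {
    "temperature": 0.2,
    "top_k": 5,
    "top_p": 0.9,
    "num_ctx": 32768,
    "repeat_penalty": 1.3,
    "repeat_last_n": 128,
    "num_predict": 512,
    "min_p": 0.05
  }
}
EOF
# Then, with Ollama running locally:
#   curl http://localhost:11434/api/generate -d @payload.json
```

This makes it quicker to A/B-test parameter combinations without rebuilding the model each time.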
And I tried your questions/prompts and got these responses:
A couple of comments @Hebbelille :
- You're using a very large context window. Why?
- I think you should add {{ .System }} to TEMPLATE; otherwise I don't think the system prompt is actually sent anywhere? (I may be off here, my bad if so.)
- You're using the Q8 version, which is the least compressed and therefore the highest quality, but it's also by far the largest and needs the most compute. Did you try lower quantizations, for instance Q5 or Q4? Those seem to be the generally recommended choices for low quality loss with low(ish) resource requirements.
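On the {{ .System }} point, here is a sketch of what I mean, reusing your template from above (untested against this particular model, and the model name in the comment is a placeholder):

```shell
# Sketch: same template as before, but with {{ .System }} prepended so the
# SYSTEM prompt is actually rendered into each request.
cat > Modelfile <<'EOF'
TEMPLATE """{{ .System }}
Spørsmål: {{ .Prompt }} Svar:"""
SYSTEM """Du er en assistent som gir enkle, informative og faktabaserte svar uten å tolke eller anta noe om spørsmålet eller personen."""
EOF
# Then rebuild the model from it:
#   ollama create norwai-test -f Modelfile
```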
(PS: not part of the NorwAI team, just have the tag because I was given early access to the models.)
hi @espenhk
I'm just using the context length given in the model card for the model. I've fiddled around with different context lengths and parameters, including system prompts of different sorts, and this one seemed to give the most coherent answers.
I tried putting {{ .System }} into the TEMPLATE, but it didn't change the response much. I tried two system prompts: the original from my first post, where I wanted short and simple responses, and this one: SYSTEM """Du er en assistent som gir fyldige, lange svar som er faktabaserte og saklige.""" (translation: "You are an assistant that gives thorough, long answers that are fact-based and objective."). Both produced responses not very different from those without {{ .System }} in the TEMPLATE, and they were both sort of unhelpful, I guess.
I haven't tried anything lower than the Q8 version. I'm getting 50-90 t/s on the GPU I have available, and for now that's fine. I could perhaps try Q5 or Q4 to have more headroom for context, but I'd like the model to respond coherently first.
These are the responses with the first system prompt (the one asking for short responses) included in the TEMPLATE:
And this one is with the second system prompt, asking for long answers:
Thank you for the parameters.
I will try to tweak them some more if I get the time.
Thank you! I tried with the instruct model and got much better results.
I will test the Instruct model as well.
It works way better, but sometimes it repeats the answer multiple times before stopping in the middle of the third run. How can I make it write the response only once?
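Not a definitive fix, but one thing that often helps with this kind of template: when the model starts a new "Spørsmål:" turn on its own, adding that marker as a stop string makes Ollama cut generation there, so the answer is written only once. A sketch under that assumption (the model name in the comment is a placeholder):

```shell
# Sketch: stop generation as soon as the model begins a new question turn.
# "Spørsmål:" matches the turn marker used in the TEMPLATE above.
cat > Modelfile <<'EOF'
TEMPLATE """Spørsmål: {{ .Prompt }} Svar:"""
PARAMETER stop "Spørsmål:"
EOF
# Rebuild with:
#   ollama create norwai-test -f Modelfile
```

If the repeats don't start with "Spørsmål:", check what string the model actually emits between runs and use that as the stop value instead.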