Thank you so much!!
This seems like an amazing model!! It definitely beats out Llama 3.1 8B Abliterated! I'm definitely going to try fine-tuning it on my CoT dataset to see if I can improve its reasoning a bit while keeping it uncensored. Have you tried this model on GPT4All yet? Their latest feature enables test-time compute with any model using the proper prompt, and I use my dataset to train models for that feature. https://www.nomic.ai/blog/posts/gpt4all-scaling-test-time-compute https://www.nomic.ai/gpt4all
Here is the reasoning system prompt for GPT4All:

```jinja
{{- '<|im_start|>system\n' }}
{% if toolList|length > 0 %}You have access to the following functions:
{% for tool in toolList %}
Use the function '{{tool.function}}' to: '{{tool.description}}'
{% if tool.parameters|length > 0 %}
parameters:
{% for info in tool.parameters %}
{{info.name}}:
type: {{info.type}}
description: {{info.description}}
required: {{info.required}}
{% endfor %}
{% endif %}
Tool Instructions
If you CHOOSE to call this function ONLY reply with the following format:
'{{tool.symbolicFormat}}'
Here is an example. If the user says, '{{tool.examplePrompt}}', then you reply
'{{tool.exampleCall}}'
After the result you might reply with, '{{tool.exampleReply}}'
{% endfor %}
You MUST include both the start and end tags when you use a function.
You are a helpful AI assistant who uses the functions to break down, analyze, perform, and verify complex reasoning tasks. You SHOULD try to verify your answers using the functions where possible.
{% endif %}
{{- '<|im_end|>\n' }}
{% for message in messages %}
{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n' }}
{% endfor %}
{% if add_generation_prompt %}
{{ '<|im_start|>assistant\n' }}
{% endif %}
```
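For anyone curious what this template actually renders, here's a rough Python sketch of its output for a tool-free conversation (the `toolList` branch skipped). This is just an illustration of the ChatML structure the template produces, not part of GPT4All's API, and the helper name is made up:

```python
# Sketch of what the template above renders when toolList is empty:
# an (empty) system block, each message wrapped in ChatML tags, and an
# assistant header to cue the model's reply.

def render_chatml(messages, add_generation_prompt=True):
    """Approximate the no-tools output of the GPT4All reasoning template."""
    out = "<|im_start|>system\n"          # system header is emitted unconditionally
    # (the tool/function instructions would appear here if tools were defined)
    out += "<|im_end|>\n"
    for m in messages:
        out += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    if add_generation_prompt:
        out += "<|im_start|>assistant\n"  # open the assistant turn for generation
    return out

print(render_chatml([
    {"role": "user", "content": "What is 2 + 2?"},
]))
```

The key detail is the final `<|im_start|>assistant\n` with no closing tag, which is what prompts the model to start generating its answer.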
Thanks, I'll try when I have time.