Triangle104/Dolphin3.0-Llama3.1-8B-Q5_K_M-GGUF
This model was converted to GGUF format from cognitivecomputations/Dolphin3.0-Llama3.1-8B using llama.cpp via ggml.ai's GGUF-my-repo space.
Refer to the original model card for more details on the model.
Model details:
Part of the Dolphin 3.0 Collection
Curated and trained by Eric Hartford, Ben Gitter, BlouseJury and Cognitive Computations
Discord: https://discord.gg/cognitivecomputations
Sponsors
Our appreciation for the generous sponsors of Dolphin 3.0:
Crusoe Cloud - provided 16x L40s for training and evals
Akash - provided on-demand 8x H100 for training
Lazarus - provided 16x H100 for training
Cerebras - provided excellent and fast inference services for data labeling
Andreessen Horowitz - provided a grant that made Dolphin 1.0 possible and enabled me to bootstrap my homelab
What is Dolphin?
Dolphin 3.0 is the next generation of the Dolphin series of instruct-tuned models, designed to be the ultimate general-purpose local model, enabling coding, math, agentic, function-calling, and general use cases.
Dolphin aims to be a general-purpose model, similar to the models behind ChatGPT, Claude, and Gemini. But those models present problems for businesses seeking to include AI in their products:
They maintain control of the system prompt, deprecating and changing things as they wish, often causing software to break.
They maintain control of the model versions, sometimes changing things silently, or deprecating older models that your business relies on.
They maintain control of the alignment, and in particular the alignment is one-size-fits all, not tailored to the application.
They can see all your queries and they can potentially use that data in ways you wouldn't want.
Dolphin, in contrast, is steerable and gives control to the system owner. You set the system prompt. You decide the alignment. You have control of your data. Dolphin does not impose its ethics or guidelines on you. You are the one who decides the guidelines.
Dolphin belongs to YOU; it is your tool, an extension of your will. Just as you are personally responsible for what you do with a knife, gun, fire, car, or the internet, you are the creator and originator of any content you generate with Dolphin.
https://erichartford.com/uncensored-models
Chat Template
We use ChatML for the chat template.
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
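Some backends apply this template automatically from the GGUF metadata, but raw completion endpoints do not. A minimal Python sketch of assembling the prompt string by hand (the helper name is illustrative, not part of any library):

def build_chatml_prompt(system: str, user: str) -> str:
    # Assemble a ChatML-formatted prompt for raw completion endpoints.
    # The trailing assistant header cues the model to begin its reply.
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(build_chatml_prompt("You are Dolphin, a helpful AI assistant.",
                          "Write a haiku about the sea."))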
System Prompt
In Dolphin, the system prompt is what you use to set the tone and alignment of the responses. You can set a character, a mood, rules for its behavior, and it will try its best to follow them.
Make sure to set the system prompt to establish the tone and guidelines for the responses; otherwise, the model will act in a default way that might not be what you want.
Example use of system prompt:
<|im_start|>system
You are Dolphin, a golang coding assistant. You only code in golang. If the user requests any other programming language, return the solution in golang instead.<|im_end|>
<|im_start|>user
Please implement A* using python<|im_end|>
<|im_start|>assistant
How to use
There are many ways to use a Hugging Face model, including:
Ollama
LM Studio
Hugging Face Transformers library
vLLM
SGLang
TGI
Ollama
Install Ollama, then run the model directly from Hugging Face:
ollama run hf.co/Triangle104/Dolphin3.0-Llama3.1-8B-Q5_K_M-GGUF:Q5_K_M
Once the model is running, set the system prompt from the Ollama REPL:
/set system <your system prompt>
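Ollama also exposes a local REST API, by default at http://localhost:11434. A minimal sketch using Python's requests library, assuming the model tag matches what ollama list reports after the pull:

import requests  # pip install requests

# Assumed tag; adjust to whatever `ollama list` shows on your machine.
MODEL = "hf.co/Triangle104/Dolphin3.0-Llama3.1-8B-Q5_K_M-GGUF:Q5_K_M"

response = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": MODEL,
        "messages": [
            {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
            {"role": "user", "content": "Write a haiku about the sea."},
        ],
        "stream": False,  # return one JSON object instead of a token stream
    },
)
print(response.json()["message"]["content"])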
Appreciation
Respect and thanks to the creators of the open source datasets that were used:
OpenCoder-LLM (opc-sft-stage1, opc-sft-stage2)
microsoft (orca-agentinstruct-1M-v1, orca-math-word-problems-200k)
NousResearch (hermes-function-calling-v1)
AI-MO (NuminaMath-CoT, NuminaMath-TIR)
allenai (tulu-3-sft-mixture)
HuggingFaceTB (smoltalk)
m-a-p (CodeFeedback-Filtered-Instruction, Code-Feedback)
Special thanks to
Meta, Qwen, and OpenCoder, who wrote papers and published models that were instrumental in creating Dolphin 3.0.
RLHFlow for the excellent reward model used to filter the datasets
Deepseek, for the ridiculously fast Deepseek-V3 that we used to augment the data.
Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
brew install llama.cpp
Invoke the llama.cpp server or the CLI.
CLI:
llama-cli --hf-repo Triangle104/Dolphin3.0-Llama3.1-8B-Q5_K_M-GGUF --hf-file dolphin3.0-llama3.1-8b-q5_k_m.gguf -p "The meaning to life and the universe is"
Server:
llama-server --hf-repo Triangle104/Dolphin3.0-Llama3.1-8B-Q5_K_M-GGUF --hf-file dolphin3.0-llama3.1-8b-q5_k_m.gguf -c 2048
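llama-server exposes an OpenAI-compatible HTTP API, by default at http://localhost:8080. A minimal sketch using Python's requests library, assuming the server command above is running with its defaults (the server applies the ChatML template from the GGUF metadata for you):

import requests  # pip install requests

response = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
            {"role": "user", "content": "Explain A* search in two sentences."},
        ],
        "temperature": 0.7,
    },
)
print(response.json()["choices"][0]["message"]["content"])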
Note: You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
git clone https://github.com/ggerganov/llama.cpp
Step 2: Move into the llama.cpp folder and build it with the LLAMA_CURL=1 flag, along with any other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux).
cd llama.cpp && LLAMA_CURL=1 make
Step 3: Run inference through the main binary.
./llama-cli --hf-repo Triangle104/Dolphin3.0-Llama3.1-8B-Q5_K_M-GGUF --hf-file dolphin3.0-llama3.1-8b-q5_k_m.gguf -p "The meaning to life and the universe is"
or
./llama-server --hf-repo Triangle104/Dolphin3.0-Llama3.1-8B-Q5_K_M-GGUF --hf-file dolphin3.0-llama3.1-8b-q5_k_m.gguf -c 2048
Model tree for Triangle104/Dolphin3.0-Llama3.1-8B-Q5_K_M-GGUF
Base model: meta-llama/Llama-3.1-8B