Locutusque committed: fix typo (README.md)
## Model Description
Hercules-2.5-Mistral-7B is a fine-tuned language model derived from Mistralai/Mistral-7B-v0.1. It is specifically designed to excel at instruction following, function calling, and conversational interactions across a range of scientific and technical domains. The fine-tuning dataset, also named Hercules-v2.5, expands upon the diverse capabilities of OpenHermes-2.5 with contributions from numerous curated datasets. This fine-tuning has equipped Hercules-2.5-Mistral-7B with enhanced abilities in:
- Complex Instruction Following: Understanding and accurately executing multi-step instructions, even those involving specialized terminology.
- Function Calling: Seamlessly interpreting and executing function calls, providing appropriate input and output values.
- No model parameters were frozen.
- This model was trained on OpenAI's ChatML prompt format. Because this model has function-calling capabilities, the prompt format is slightly different; here's what it would look like:

  ```
  <|im_start|>system
  {message}<|im_end|>
  <|im_start|>user
  {user message}<|im_end|>
  <|im_start|>call
  {function call message}<|im_end|>
  <|im_start|>function
  {function response message}<|im_end|>
  <|im_start|>assistant
  {assistant message}</s>
  ```
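As a rough illustration, the turns in the format above can be assembled into a single prompt string. The helper below is a hypothetical sketch (not part of the model's tooling), and the example messages and function payloads are placeholders:

```python
# Minimal sketch of building a ChatML-style prompt with the extra "call" and
# "function" roles described above. All message contents are illustrative.

def build_prompt(turns):
    """Join (role, message) pairs into a ChatML-style prompt string,
    leaving the final assistant turn open so the model generates a reply."""
    parts = []
    for role, message in turns:
        parts.append(f"<|im_start|>{role}\n{message}<|im_end|>\n")
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_prompt([
    ("system", "You are a helpful assistant with tool access."),
    ("user", "What is the weather in Paris?"),
    ("call", '{"name": "get_weather", "arguments": {"city": "Paris"}}'),
    ("function", '{"temperature_c": 18, "condition": "cloudy"}'),
])
print(prompt)
```

Note that the template shown above ends a completed assistant turn with `</s>`; when prompting for a new reply, the assistant turn is left open as in this sketch.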
This model was fine-tuned using the TPU-Alignment repository: https://github.com/Locutusque/TPU-Alignment
# Updates
- **🔥 Earned a score of nearly 64 on the Open LLM Leaderboard, outperforming most merge-free SFT Mistral fine-tunes 🔥**