Triangle104 committed (verified)
Commit bd6f1e8 · 1 Parent(s): c72a5a9

Update README.md

Files changed (1)
  1. README.md +13 -0
README.md CHANGED
@@ -19,6 +19,19 @@ model-index:
  This model was converted to GGUF format from [`open-thoughts/OpenThinker-7B`](https://huggingface.co/open-thoughts/OpenThinker-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/open-thoughts/OpenThinker-7B) for more details on the model.
 
+ ---
+ This model is a fine-tuned version of Qwen/Qwen2.5-7B-Instruct on the
+ OpenThoughts-114k dataset.
+
+
+ The dataset is derived by distilling DeepSeek-R1 using the data pipeline available on GitHub.
+ More info about the dataset can be found on the OpenThoughts-114k dataset card.
+
+
+ This model improves upon the Bespoke-Stratos-7B model, which used 17k examples (the Bespoke-Stratos-17k dataset).
+ The numbers reported in the table below are evaluated with our open-source tool Evalchemy.
+
+ ---
  ## Use with llama.cpp
  Install llama.cpp through brew (works on Mac and Linux)
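
For reference, a minimal usage sketch of the steps this section of the README goes on to describe. The Homebrew package and the `--hf-repo`/`--hf-file` flags are standard llama.cpp usage, but the repo and quant file names below are illustrative placeholders in the usual GGUF-my-repo naming style, not taken from this commit; substitute the actual GGUF repo and filename.

```bash
# Install llama.cpp via Homebrew (works on macOS and Linux)
brew install llama.cpp

# Chat with the model from the CLI; --hf-repo/--hf-file download the GGUF
# directly from the Hugging Face Hub. Repo and file names are placeholders.
llama-cli --hf-repo Triangle104/OpenThinker-7B-Q4_K_M-GGUF \
  --hf-file openthinker-7b-q4_k_m.gguf \
  -p "The meaning to life and the universe is"

# Or serve an OpenAI-compatible HTTP endpoint with a 2048-token context
llama-server --hf-repo Triangle104/OpenThinker-7B-Q4_K_M-GGUF \
  --hf-file openthinker-7b-q4_k_m.gguf \
  -c 2048
```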