prithivMLmods committed: Update README.md
- text-generation-inference
---

# **QWQ R1 [Reasoning] Distill 1.5B CoT**

QWQ R1 [Reasoning] Distill 1.5B CoT is a fine-tuned language model for advanced reasoning and instruction-following tasks. It builds on the DeepSeek R1 Distill (Qwen2.5) base model and has been fine-tuned on chain-of-thought (CoT) reasoning datasets. The model is optimized for tasks that require logical reasoning, detailed explanations, and multi-step problem-solving, making it well suited to instruction following, text generation, and complex reasoning applications.
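As a minimal usage sketch, the model can be loaded with the Hugging Face `transformers` chat-template API. The repository id `prithivMLmods/QwQ-R1-Distill-1.5B-CoT`, the system prompt, and the helper names below are illustrative assumptions, not confirmed parts of this release:

```python
# Hypothetical usage sketch; the repo id and helper names are assumptions.
model_id = "prithivMLmods/QwQ-R1-Distill-1.5B-CoT"


def build_chat(question: str) -> list[dict]:
    """Wrap a question in the message format expected by the chat template."""
    return [
        {"role": "system", "content": "You are a helpful assistant. Reason step by step."},
        {"role": "user", "content": question},
    ]


def generate(question: str, max_new_tokens: int = 512) -> str:
    """Run one CoT-style generation; transformers is imported lazily so the
    prompt helper above can be used without the heavy dependency."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )
    # Apply the tokenizer's chat template and move inputs to the model device.
    inputs = tokenizer.apply_chat_template(
        build_chat(question), add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, dropping the prompt.
    return tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)
```

For multi-step reasoning prompts, a larger `max_new_tokens` budget leaves room for the chain-of-thought trace before the final answer.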