Update README.md
README.md CHANGED

```diff
@@ -34,7 +34,7 @@ Phi-4-MedIT-10B-o1 is a specialized large language model pruned for efficiency a
 
 - **Pruned Model:**
   - Derived from the **phi-4 14B model**, reduced to 10B parameters using the **MKA2G pruning technique**.
-  - For details on the pruning methodology, see [MedITSolutionsKurman/llama-pruning](https://github.com/
+  - For details on the pruning methodology, see [MedITSolutionsKurman/llama-pruning](https://github.com/MedITSolutionsKurman/llama-pruning) on GitHub.
 
 - **Reasoning SFT:**
   - Fine-tuned with a **single epoch of reasoning-specific supervised fine-tuning (SFT)** for optimized reasoning tasks.
```
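The README describes reducing a 14B-parameter model to 10B via pruning. As a rough illustration of the *general* idea of structured layer pruning — not of MKA2G itself, whose actual scoring rule lives in the linked llama-pruning repository — here is a minimal sketch: assign each layer an importance score, then drop the lowest-scoring layers until the parameter budget is met. The layer sizes, scores, and scoring rule below are hypothetical.

```python
# Illustrative sketch only: generic importance-based layer pruning.
# MKA2G (see MedITSolutionsKurman/llama-pruning) may score and remove
# structures differently; all numbers here are toy values.

def prune_layers(layer_params, layer_scores, target_params):
    """Drop the lowest-scoring layers until the total parameter count
    is at or below target_params. Returns the kept layer indices."""
    total = sum(layer_params)
    # Visit layers from least to most important.
    order = sorted(range(len(layer_scores)), key=lambda i: layer_scores[i])
    kept = set(range(len(layer_params)))
    for i in order:
        if total <= target_params:
            break
        kept.discard(i)
        total -= layer_params[i]
    return sorted(kept)

# Toy example: eight equal-size "layers" of 1.75B params (14B total),
# pruned down to a 10B budget.
params = [1_750_000_000] * 8
scores = [0.9, 0.2, 0.8, 0.1, 0.7, 0.3, 0.95, 0.6]
kept = prune_layers(params, scores, 10_000_000_000)
print(kept)  # → [0, 2, 4, 6, 7]
```

In this toy run the three lowest-scoring layers (indices 3, 1, 5) are removed, bringing the total under the 10B budget; a real pipeline would then fine-tune the smaller model (as the README's reasoning SFT step does) to recover quality lost to pruning.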