flydust committed
Commit cf55cfe · verified · 1 Parent(s): bab3aa1

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -42,7 +42,7 @@ The overall performance is even better than the official Llama-3-8B-Instruct Mod
  - **Alpaca Eval 2 (vs GPT-4-Turbo-1106): 38.52 (LC), 38.47 (WR)**
  - **Alpaca Eval 2 (vs Llama-3-8B-Instruct): 69.37 (LC), 70.05 (WR)**
  - **Arena Hard: 32.4**
- - **WildBench: 39.3 (Best <30B Model! 🏆)**
+ - **WildBench: 39.3 ((was) Best <30B Model! 🏆)**
  - **Zero-Eval GSM: 54.62**
 
  ## Model Performance
@@ -81,7 +81,7 @@ We compare our Llama-3-8B-Magpie-Align with official and other **open-aligned LL
 
  **Conversation Template**: Please use Llama 3 **official chat template** for the best performance.
 
- **How to use it?** Please check the official [Llama 3 repository](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct#how-to-use) for detailed instructions. Simply replace the original `model_id` with `Magpie-Align/Llama-3-8B-Magpie-Align-SFT-v1.0`.
+ **How to use it?** Please check the official [Llama 3 repository](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct#how-to-use) for detailed instructions. Simply replace the original `model_id` with `Magpie-Align/Llama-3-8B-Magpie-Align-v0.1`.
 
  The detailed training pipeline is as follows.
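
For context, the changed "How to use it?" line amounts to the usage pattern below. This is a minimal sketch assuming the standard `transformers` text-generation pipeline documented in the linked Llama 3 repository, with `model_id` swapped to the `Magpie-Align/Llama-3-8B-Magpie-Align-v0.1` name introduced in this commit; the prompt and generation settings are illustrative only.

```python
# Minimal sketch: standard transformers text-generation pipeline,
# with model_id pointed at the checkpoint named in the diff.
import torch
import transformers

model_id = "Magpie-Align/Llama-3-8B-Magpie-Align-v0.1"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

# Chat-style input; the pipeline applies the Llama 3 chat template
# bundled with the tokenizer, matching the README's recommendation
# to use the official chat template.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who are you?"},
]

outputs = pipeline(messages, max_new_tokens=256)
# The last element of the returned conversation is the model's reply.
print(outputs[0]["generated_text"][-1])
```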