Adding Evaluation Results

#3
Files changed (1)
  1. README.md +14 -1
README.md CHANGED
@@ -27,4 +27,17 @@ Merge of [Open-Orca/Mistral-7B-SlimOrca](https://huggingface.co/Open-Orca/Mistra
 | ARC (25-shot) | |
 | HellaSwag (10-shot) | |
 | MMLU (5-shot) | |
-| TruthfulQA (0-shot) | |
+| TruthfulQA (0-shot) | |
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_PulsarAI__SlimOpenOrca-Mistral-7B-v2)
+
+| Metric | Value |
+|-----------------------|---------------------------|
+| Avg. | 52.96 |
+| ARC (25-shot) | 62.88 |
+| HellaSwag (10-shot) | 83.41 |
+| MMLU (5-shot) | 62.05 |
+| TruthfulQA (0-shot) | 56.65 |
+| Winogrande (5-shot) | 77.58 |
+| GSM8K (5-shot) | 18.95 |
+| DROP (3-shot) | 9.19 |
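
For context, the leaderboard's `Avg.` row is the plain arithmetic mean of the seven benchmark scores added in this PR. A minimal sanity-check sketch (hypothetical, not part of the PR itself):

```python
# Verify that "Avg." in the added table is the arithmetic mean
# of the seven per-benchmark scores listed above.
scores = {
    "ARC (25-shot)": 62.88,
    "HellaSwag (10-shot)": 83.41,
    "MMLU (5-shot)": 62.05,
    "TruthfulQA (0-shot)": 56.65,
    "Winogrande (5-shot)": 77.58,
    "GSM8K (5-shot)": 18.95,
    "DROP (3-shot)": 9.19,
}

avg = sum(scores.values()) / len(scores)
print(f"Avg. = {avg:.2f}")  # -> Avg. = 52.96, matching the table
```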