Update README.md
README.md
CHANGED
@@ -19,15 +19,6 @@ Merge of [Open-Orca/Mistral-7B-SlimOrca](https://huggingface.co/Open-Orca/Mistra
-[Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca): 0.5
 
-# Evaluation Results ([Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard))
-
-| Metric | Value |
-|-----------------------|-------|
-| Avg. | |
-| ARC (25-shot) | |
-| HellaSwag (10-shot) | |
-| MMLU (5-shot) | |
-| TruthfulQA (0-shot) | |
 
 # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
 Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_PulsarAI__SlimOpenOrca-Mistral-7B-v2)
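The removed line above records a 0.5 merge weight between Open-Orca/Mistral-7B-SlimOrca and Open-Orca/Mistral-7B-OpenOrca. As a rough illustration only, a merge like this could be expressed in a mergekit-style config; the commit does not state which tool or merge method was actually used, so every field below is an assumption:

```yaml
# Hypothetical mergekit-style config; the actual tool and merge
# method used by the author are not stated in the commit.
models:
  - model: Open-Orca/Mistral-7B-SlimOrca
    parameters:
      weight: 0.5            # assumed equal weighting, per the removed README line
  - model: Open-Orca/Mistral-7B-OpenOrca
    parameters:
      weight: 0.5
merge_method: linear         # assumed; a plain weighted average of parameters
dtype: float16
```

A linear merge with equal weights simply averages the two checkpoints' parameter tensors, which is consistent with the single 0.5 ratio the README listed.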