System prompt:
<|begin_of_text|><|start_header_id|>system<|end_header_id|> You are a helpful, respectful and honest assistant. Always answer as helpfully as possible. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <|eot_id|>
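A minimal sketch of assembling this system prompt into a single-turn Llama-3-style chat prompt by hand. The exact template this model expects is an assumption based on the special tokens shown above; if you load the model with `transformers`, `tokenizer.apply_chat_template` is the safer route.

```python
# Hypothetical helper: builds a Llama-3-format prompt string around the
# recommended system prompt. The template layout is an assumption inferred
# from the special tokens in the model card.

SYSTEM_PROMPT = (
    "You are a helpful, respectful and honest assistant. Always answer as "
    "helpfully as possible. If a question does not make any sense, or is not "
    "factually coherent, explain why instead of answering something not "
    "correct. If you don't know the answer to a question, please don't share "
    "false information."
)

def build_prompt(user_message: str) -> str:
    """Assemble a single-turn chat prompt in the Llama-3 special-token format."""
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{SYSTEM_PROMPT}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_prompt("What is the capital of France?")
print(prompt)
```

The generated string ends at the open assistant header, so the model's completion begins the assistant turn.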
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 15.81 |
| IFEval (0-Shot) | 32.33 |
| BBH (3-Shot) | 22.06 |
| MATH Lvl 5 (4-Shot) | 5.29 |
| GPQA (0-Shot) | 3.80 |
| MuSR (0-Shot) | 8.82 |
| MMLU-PRO (5-Shot) | 22.57 |
Model tree for netcat420/MFANNv0.21
Dataset used to train netcat420/MFANNv0.21
Evaluation results (Open LLM Leaderboard)
- IFEval (0-Shot), strict accuracy: 32.33
- BBH (3-Shot), normalized accuracy: 22.06
- MATH Lvl 5 (4-Shot), exact match: 5.29
- GPQA (0-Shot), acc_norm: 3.80
- MuSR (0-Shot), acc_norm: 8.82
- MMLU-PRO (5-Shot, test set), accuracy: 22.57