---
language:
- en
license: apache-2.0
tags:
- nvidia
- code
- math
base_model:
- mistralai/Mistral-7B-v0.1
datasets:
- nvidia/OpenMathInstruct-1
model-index:
- name: OpenMath-Mistral-7B-v0.1-hf
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 59.39
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nvidia/OpenMath-Mistral-7B-v0.1-hf
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 81.78
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nvidia/OpenMath-Mistral-7B-v0.1-hf
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 59.34
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nvidia/OpenMath-Mistral-7B-v0.1-hf
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 46.13
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nvidia/OpenMath-Mistral-7B-v0.1-hf
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 77.27
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nvidia/OpenMath-Mistral-7B-v0.1-hf
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 0.08
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nvidia/OpenMath-Mistral-7B-v0.1-hf
      name: Open LLM Leaderboard
---

# OpenMath-Mistral-7B-v0.1-hf

OpenMath models were designed to solve mathematical problems by integrating text-based reasoning with code blocks executed by a Python interpreter. The models were trained on [OpenMathInstruct-1](https://huggingface.co/datasets/nvidia/OpenMathInstruct-1), a math instruction tuning dataset with 1.8M problem-solution pairs generated using the permissively licensed [Mixtral-8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) model.
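For quick experimentation, the checkpoint can be loaded with the standard 🤗 Transformers API. This is a minimal sketch assuming default causal-LM usage; the prompt and decoding settings here are illustrative, not the officially recommended inference setup, and the model is expected to emit Python code blocks that should be executed externally (see the sketch after the table below).

```python
# Minimal loading/generation sketch for the HF checkpoint.
# Greedy decoding and the bare prompt are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/OpenMath-Mistral-7B-v0.1-hf"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "What is the minimum value of $a^2 + 6a - 7$?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```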
Accuracy on GSM8K and MATH, with greedy decoding and with majority voting over 50 sampled solutions:

| model | GSM8K (greedy) | MATH (greedy) | GSM8K (majority@50) | MATH (majority@50) |
|---|---:|---:|---:|---:|
| OpenMath-CodeLlama-7B (nemo \| HF) | 75.9 | 43.6 | 84.8 | 55.6 |
| OpenMath-Mistral-7B (nemo \| HF) | 80.2 | 44.5 | 86.9 | 57.2 |
| OpenMath-CodeLlama-13B (nemo \| HF) | 78.8 | 45.5 | 86.8 | 57.6 |
| OpenMath-CodeLlama-34B (nemo \| HF) | 80.7 | 48.3 | 88.0 | 60.2 |
| OpenMath-Llama2-70B (nemo \| HF) | 84.7 | 46.3 | 90.1 | 58.3 |
| OpenMath-CodeLlama-70B (nemo \| HF) | 84.6 | 50.7 | 90.8 | 60.4 |
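Because the model interleaves reasoning with code meant for an external Python interpreter, raw generations need a post-processing step that extracts and runs the code and feeds the result back. The sketch below is hypothetical: the `<llm-code>` delimiters are an assumption about the solution format (check the OpenMathInstruct-1 dataset card for the exact markers), and executing model-generated code should only ever happen inside a sandbox.

```python
# Hypothetical post-processing sketch: pull the first code block out of a
# completion, run it, and capture its stdout as the "code output".
import contextlib
import io
import re


def run_generated_code(completion: str) -> str | None:
    """Execute the first <llm-code> block in `completion`, return its stdout."""
    # Assumed delimiters; adjust to the actual solution format.
    match = re.search(r"<llm-code>(.*?)</llm-code>", completion, re.DOTALL)
    if match is None:
        return None
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        exec(match.group(1), {})  # use a real sandbox in practice
    return buffer.getvalue().strip()
```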