leaderboard-pr-bot committed on
Commit c0bd1c0 · verified · 1 parent: 3b35e73

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions

Files changed (1): README.md (+115, -7)
README.md CHANGED

@@ -1,12 +1,7 @@
  ---
- license: apache-2.0
- datasets:
- - allenai/tulu-3-sft-mixture
- - allenai/llama-3.1-tulu-3-8b-preference-mixture
  language:
  - en
- base_model:
- - HuggingFaceTB/SmolLM2-1.7B
  library_name: transformers
  tags:
  - Tulu3

@@ -18,7 +13,107 @@ tags:
  - SFT
  - DPO
  - GGUF
  pipeline_tag: text-generation
  ---

  # SmolLM2 1.7b Instruction Tuned & DPO Aligned through Tulu 3!

@@ -68,4 +163,17 @@ outputs = model.generate(inputs)
  print(tokenizer.decode(outputs[0]))
  ```

- You can also use the model in llama.cpp through the [gguf version](https://huggingface.co/SultanR/SmolTulu-1.7b-Instruct-GGUF)!
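The deletions in the front-matter hunk above are not losses: every removed key (`license`, `datasets`, `base_model`) reappears lower in the new version of the YAML block, and the only genuinely new top-level key is `model-index`. A quick standalone check of that claim, with the key lists transcribed from this diff (this snippet is illustrative only, not part of the PR):

```python
# Top-level front-matter keys before and after the PR, transcribed from
# the diff; the PR moves license/datasets/base_model lower in the block
# and appends a single new key, model-index.
before = ["license", "datasets", "language", "base_model",
          "library_name", "tags", "pipeline_tag"]
after = ["language", "license", "library_name", "tags",
         "base_model", "datasets", "pipeline_tag", "model-index"]

print(sorted(set(after) - set(before)))  # ['model-index'] -- the only new key
print(sorted(set(before) - set(after)))  # [] -- nothing is removed
```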
  ---
  language:
  - en
+ license: apache-2.0
  library_name: transformers
  tags:
  - Tulu3

  - SFT
  - DPO
  - GGUF
+ base_model:
+ - HuggingFaceTB/SmolLM2-1.7B
+ datasets:
+ - allenai/tulu-3-sft-mixture
+ - allenai/llama-3.1-tulu-3-8b-preference-mixture
  pipeline_tag: text-generation
+ model-index:
+ - name: SmolTulu-1.7b-it-v0
+   results:
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: IFEval (0-Shot)
+       type: HuggingFaceH4/ifeval
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: inst_level_strict_acc and prompt_level_strict_acc
+       value: 65.41
+       name: strict accuracy
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=SultanR/SmolTulu-1.7b-it-v0
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: BBH (3-Shot)
+       type: BBH
+       args:
+         num_few_shot: 3
+     metrics:
+     - type: acc_norm
+       value: 12.26
+       name: normalized accuracy
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=SultanR/SmolTulu-1.7b-it-v0
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MATH Lvl 5 (4-Shot)
+       type: hendrycks/competition_math
+       args:
+         num_few_shot: 4
+     metrics:
+     - type: exact_match
+       value: 2.64
+       name: exact match
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=SultanR/SmolTulu-1.7b-it-v0
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: GPQA (0-shot)
+       type: Idavidrein/gpqa
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: acc_norm
+       value: 2.57
+       name: acc_norm
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=SultanR/SmolTulu-1.7b-it-v0
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MuSR (0-shot)
+       type: TAUR-Lab/MuSR
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: acc_norm
+       value: 1.92
+       name: acc_norm
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=SultanR/SmolTulu-1.7b-it-v0
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MMLU-PRO (5-shot)
+       type: TIGER-Lab/MMLU-Pro
+       config: main
+       split: test
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 7.89
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=SultanR/SmolTulu-1.7b-it-v0
+       name: Open LLM Leaderboard
  ---
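The `model-index` block added above follows the Hub's card-metadata layout: a list of results, each pairing a task with a dataset, one or more metrics, and a source link. As a rough sketch of how a script might walk that structure once the front matter is parsed — plain Python dicts stand in for the parsed YAML, and only the first two entries are transcribed for brevity:

```python
# Parsed form of (part of) the model-index block above, with plain dicts
# standing in for the parsed YAML front matter; values are from this PR.
model_index = [{
    "name": "SmolTulu-1.7b-it-v0",
    "results": [
        {"task": {"type": "text-generation"},
         "dataset": {"name": "IFEval (0-Shot)", "type": "HuggingFaceH4/ifeval"},
         "metrics": [{"type": "inst_level_strict_acc and prompt_level_strict_acc",
                      "value": 65.41, "name": "strict accuracy"}]},
        {"task": {"type": "text-generation"},
         "dataset": {"name": "BBH (3-Shot)", "type": "BBH"},
         "metrics": [{"type": "acc_norm", "value": 12.26,
                      "name": "normalized accuracy"}]},
    ],
}]

# Flatten to a benchmark-name -> score mapping
scores = {r["dataset"]["name"]: r["metrics"][0]["value"]
          for r in model_index[0]["results"]}
print(scores)  # {'IFEval (0-Shot)': 65.41, 'BBH (3-Shot)': 12.26}
```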
  # SmolLM2 1.7b Instruction Tuned & DPO Aligned through Tulu 3!

  print(tokenizer.decode(outputs[0]))
  ```

+ You can also use the model in llama.cpp through the [gguf version](https://huggingface.co/SultanR/SmolTulu-1.7b-Instruct-GGUF)!
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_SultanR__SmolTulu-1.7b-it-v0)
+
+ | Metric             |Value|
+ |--------------------|----:|
+ |Avg.                |15.45|
+ |IFEval (0-Shot)     |65.41|
+ |BBH (3-Shot)        |12.26|
+ |MATH Lvl 5 (4-Shot) | 2.64|
+ |GPQA (0-shot)       | 2.57|
+ |MuSR (0-shot)       | 1.92|
+ |MMLU-PRO (5-shot)   | 7.89|
+
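As a sanity check on the table the bot adds: the `Avg.` row is just the unweighted mean of the six benchmark scores, rounded to two decimals. A small standalone sketch (not part of the PR):

```python
# Benchmark scores from the leaderboard table above
scores = {
    "IFEval (0-Shot)": 65.41,
    "BBH (3-Shot)": 12.26,
    "MATH Lvl 5 (4-Shot)": 2.64,
    "GPQA (0-shot)": 2.57,
    "MuSR (0-shot)": 1.92,
    "MMLU-PRO (5-shot)": 7.89,
}

# The leaderboard's "Avg." is the unweighted mean over the six benchmarks
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 15.45 -- matches the Avg. row in the table
```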