---
license: llama3.2
library_name: transformers
tags:
- mergekit
- merge
base_model:
- prithivMLmods/Llama-Chat-Summary-3.2-3B
- prithivMLmods/Llama-Thinker-3B-Preview2
- NousResearch/Hermes-3-Llama-3.2-3B
model-index:
- name: Hermes-Llama-3.2-CoT-Summary
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 48.3
      name: strict accuracy
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Triangle104/Hermes-Llama-3.2-CoT-Summary
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 17.39
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Triangle104/Hermes-Llama-3.2-CoT-Summary
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 8.16
      name: exact match
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Triangle104/Hermes-Llama-3.2-CoT-Summary
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 0.78
      name: acc_norm
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Triangle104/Hermes-Llama-3.2-CoT-Summary
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 4.69
      name: acc_norm
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Triangle104/Hermes-Llama-3.2-CoT-Summary
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 21.13
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Triangle104/Hermes-Llama-3.2-CoT-Summary
      name: Open LLM Leaderboard
---
# Merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details

Hermes 3 with chain-of-thought (CoT) reasoning and chat summarization merged in.
Quant: https://huggingface.co/Triangle104/Hermes-Llama-3.2-CoT-Summary-Q4_K_M-GGUF
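The merge can be used like any other `transformers` causal LM. Below is a minimal sketch of loading and prompting it; the repo id is taken from the leaderboard links above, the chat template is inherited from the Hermes 3 base, and the dtype/device settings are only suggestions for typical hardware.

```python
# Minimal usage sketch: load the merged model with transformers and run one chat turn.
# Repo id assumed from the leaderboard links above; adjust dtype/device for your setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Triangle104/Hermes-Llama-3.2-CoT-Summary"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # matches the merge dtype in the config below
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant. Think step by step."},
    {"role": "user", "content": "Summarize the trade-offs of model merging in three bullets."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```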
### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with NousResearch/Hermes-3-Llama-3.2-3B as the base.
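For intuition, the toy sketch below shows the core TIES steps for a single tensor: trim each task vector to the top-`density` fraction by magnitude, elect a sign per parameter, then sum only the entries that agree with that sign. It is a simplified illustration of the method from the TIES paper, not mergekit's implementation, and simply reuses the density/weight values from the configuration further down.

```python
# Toy, single-tensor illustration of TIES merging (trim -> elect sign -> merge).
# NOT mergekit's implementation; mirrors density=0.5, weight=0.5, normalize=false below.
import torch

def ties_merge_tensor(base, finetuned, density=0.5, weights=None):
    weights = weights if weights is not None else [1.0] * len(finetuned)

    trimmed = []
    for ft in finetuned:
        delta = ft - base                               # task vector vs. the base model
        k = max(1, int(density * delta.numel()))        # keep top-`density` fraction by magnitude
        cutoff = delta.abs().flatten().topk(k).values.min()
        trimmed.append(torch.where(delta.abs() >= cutoff, delta, torch.zeros_like(delta)))

    weighted = torch.stack([w * d for w, d in zip(weights, trimmed)])
    elected = torch.sign(weighted.sum(dim=0))           # elect a sign per parameter
    agree = torch.sign(weighted) == elected             # drop entries that disagree with it
    merged_delta = (weighted * agree).sum(dim=0)        # normalize: false -> plain weighted sum
    return base + merged_delta

# Two hypothetical fine-tunes merged with the weights used in the config below:
base = torch.randn(8, 8)
ft_a, ft_b = base + 0.1 * torch.randn(8, 8), base + 0.1 * torch.randn(8, 8)
merged = ties_merge_tensor(base, [ft_a, ft_b], density=0.5, weights=[0.5, 0.5])
```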
### Models Merged

The following models were included in the merge:

* prithivMLmods/Llama-Thinker-3B-Preview2
* prithivMLmods/Llama-Chat-Summary-3.2-3B
### Configuration

The following YAML configuration was used to produce this model:
```yaml
models:
  - model: NousResearch/Hermes-3-Llama-3.2-3B
    # no parameters necessary for base model
  - model: prithivMLmods/Llama-Thinker-3B-Preview2
    parameters:
      density: 0.5
      weight: 0.5
  - model: prithivMLmods/Llama-Chat-Summary-3.2-3B
    parameters:
      density: 0.5
      weight: 0.5
merge_method: ties
base_model: NousResearch/Hermes-3-Llama-3.2-3B
parameters:
  normalize: false
  int8_mask: true
dtype: float16
```
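To reproduce the merge, the config above can be saved to a file (here `config.yml`, a hypothetical name) and run with mergekit, either via the documented `mergekit-yaml config.yml ./merged --cuda` CLI or via its Python API. The sketch below follows the usage example in the mergekit README; `MergeOptions` fields and the output path are assumptions, so check them against your installed mergekit version.

```python
# Sketch of running the merge via mergekit's Python API (per the mergekit README).
# "config.yml" and "./merged" are hypothetical paths; verify the MergeOptions
# fields against your mergekit version.
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./merged",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```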
# Open LLM Leaderboard Evaluation Results

Detailed results can be found here.
| Metric              | Value |
|---------------------|------:|
| Avg.                | 16.74 |
| IFEval (0-Shot)     | 48.30 |
| BBH (3-Shot)        | 17.39 |
| MATH Lvl 5 (4-Shot) |  8.16 |
| GPQA (0-shot)       |  0.78 |
| MuSR (0-shot)       |  4.69 |
| MMLU-PRO (5-shot)   | 21.13 |
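The Avg. row appears to be the unweighted mean of the six benchmark scores; a quick sanity check:

```python
# Sanity check: the leaderboard average is the unweighted mean of the six scores.
scores = {"IFEval": 48.30, "BBH": 17.39, "MATH Lvl 5": 8.16,
          "GPQA": 0.78, "MuSR": 4.69, "MMLU-PRO": 21.13}
print(round(sum(scores.values()) / len(scores), 2))  # 16.74
```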