# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
## Merge Details

### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [Qwen/Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B) as the base model.
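
For intuition, below is a minimal, illustrative sketch of the TIES procedure (trim, elect sign, disjoint merge) applied to a single parameter tensor. This is not mergekit's implementation; the function name is hypothetical and it simplifies some details (e.g., it averages agreeing deltas by count rather than by weight sum).

```python
import torch

def ties_merge_tensor(base, finetunes, weights, density=1.0):
    """Illustrative TIES merge of one parameter tensor from several fine-tunes."""
    # 1. Task vectors: how each fine-tune differs from the shared base model.
    deltas = [ft - base for ft in finetunes]

    # 2. Trim: keep only the largest-magnitude `density` fraction of each delta.
    #    (With density: 1.0, as in the configuration below, nothing is trimmed.)
    if density < 1.0:
        trimmed = []
        for d in deltas:
            k = max(1, int(d.numel() * density))
            threshold = d.abs().flatten().topk(k).values[-1]
            trimmed.append(torch.where(d.abs() >= threshold, d, torch.zeros_like(d)))
        deltas = trimmed

    # 3. Elect sign: per parameter, take the sign of the weighted sum of deltas.
    stacked = torch.stack([w * d for w, d in zip(weights, deltas)])
    elected = torch.sign(stacked.sum(dim=0))

    # 4. Disjoint merge: average only the contributions that agree with the elected sign.
    agree = torch.sign(stacked) == elected
    merged_delta = (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)

    return base + merged_delta
```

With `density: 1.0` and `weight: 1.0` for both models (as in the configuration below), the trim step is a no-op and the result is essentially a sign-resolved average of the two task vectors added back onto the Qwen2.5-72B base.
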
### Models Merged

The following models were included in the merge:

* [abacusai/Dracarys2-72B-Instruct](https://huggingface.co/abacusai/Dracarys2-72B-Instruct)
* [rombodawg/Rombos-LLM-V2.5-Qwen-72b](https://huggingface.co/rombodawg/Rombos-LLM-V2.5-Qwen-72b)
### Configuration

The following YAML configuration was used to produce this model:
```yaml
# Qwen2.5-72B-2x-Instruct-TIES-v1.0
models:
  - model: abacusai/Dracarys2-72B-Instruct  # Coding enhancement fine-tuning
    parameters:
      density: 1.0
      weight: 1.0
  - model: rombodawg/Rombos-LLM-V2.5-Qwen-72b  # Continuous fine-tuning
    parameters:
      density: 1.0
      weight: 1.0
merge_method: ties
base_model: Qwen/Qwen2.5-72B  # Reflecting the most recent models as of 11/11/2024
parameters:
  normalize: true
  int8_mask: false
dtype: bfloat16
tokenizer_source: union  # Experimental, preliminarily seems to garner good results
```
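
For reproducibility, here is a minimal sketch of running this configuration through mergekit's Python API (the `mergekit-yaml` CLI is the more common route). The config filename, output directory, and option values are placeholder assumptions, and the API surface may differ between mergekit versions.

```python
# Sketch: reproduce the merge from the YAML above via mergekit's Python API.
# Assumes mergekit is installed (pip install mergekit); all paths are placeholders.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("qwen2.5-72b-2x-instruct-ties.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Qwen2.5-72B-2x-Instruct-TIES-v1.0",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use GPU if available; CPU works but is slow
        copy_tokenizer=True,             # write tokenizer files alongside the merged weights
        lazy_unpickle=True,              # reduce peak memory while loading the 72B checkpoints
        low_cpu_memory=False,
    ),
)
```
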