# tmp
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method

This model was merged using the task arithmetic merge method, with ./evol_merge_storage/input_models/Mistral-7B-Instruct-v0.2_674785087 as the base.
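Conceptually, task arithmetic forms a task vector for each source model (its weights minus the base weights) and adds a weighted sum of those vectors back onto the base. The sketch below illustrates the idea per tensor; the function name, tensor shapes, and toy weights are hypothetical and not mergekit's actual implementation, which applies the per-slice weights listed in the configuration further down.

```python
# Minimal sketch of task arithmetic applied to a single weight tensor.
import torch

def task_arithmetic(base: torch.Tensor,
                    finetuned: list[torch.Tensor],
                    weights: list[float]) -> torch.Tensor:
    """merged = base + sum_i w_i * (finetuned_i - base)"""
    merged = base.clone()
    for theta, w in zip(finetuned, weights):
        merged += w * (theta - base)  # add the weighted task vector
    return merged

# Toy example: three source models merged onto a shared base tensor.
base = torch.randn(8, 8)
sources = [base + 0.01 * torch.randn(8, 8) for _ in range(3)]
merged = task_arithmetic(base, sources, [0.45, 0.16, 0.55])
```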
### Models Merged
The following models were included in the merge:
- ./evol_merge_storage/input_models/RakutenAI-7B-chat_2028928689
- ./evol_merge_storage/input_models/OpenMath-Mistral-7B-v0.1-hf_3930120330
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: ./evol_merge_storage/input_models/Mistral-7B-Instruct-v0.2_674785087
dtype: bfloat16
merge_method: task_arithmetic
parameters:
  int8_mask: 1.0
  normalize: 0.0
slices:
- sources:
  - layer_range: [0, 4]
    model: ./evol_merge_storage/input_models/RakutenAI-7B-chat_2028928689
    parameters:
      weight: 0.4520595057576112
  - layer_range: [0, 4]
    model: ./evol_merge_storage/input_models/OpenMath-Mistral-7B-v0.1-hf_3930120330
    parameters:
      weight: 0.1600776520249821
  - layer_range: [0, 4]
    model: ./evol_merge_storage/input_models/Mistral-7B-Instruct-v0.2_674785087
    parameters:
      weight: 0.5490392773476699
- sources:
  - layer_range: [4, 8]
    model: ./evol_merge_storage/input_models/RakutenAI-7B-chat_2028928689
    parameters:
      weight: 0.4227443099700199
  - layer_range: [4, 8]
    model: ./evol_merge_storage/input_models/OpenMath-Mistral-7B-v0.1-hf_3930120330
    parameters:
      weight: -0.30631262406307586
  - layer_range: [4, 8]
    model: ./evol_merge_storage/input_models/Mistral-7B-Instruct-v0.2_674785087
    parameters:
      weight: 0.6904255251091812
- sources:
  - layer_range: [8, 12]
    model: ./evol_merge_storage/input_models/RakutenAI-7B-chat_2028928689
    parameters:
      weight: 0.27934507955064164
  - layer_range: [8, 12]
    model: ./evol_merge_storage/input_models/OpenMath-Mistral-7B-v0.1-hf_3930120330
    parameters:
      weight: 0.13357572581279714
  - layer_range: [8, 12]
    model: ./evol_merge_storage/input_models/Mistral-7B-Instruct-v0.2_674785087
    parameters:
      weight: 1.0878530319347262
- sources:
  - layer_range: [12, 16]
    model: ./evol_merge_storage/input_models/RakutenAI-7B-chat_2028928689
    parameters:
      weight: 0.2797021800421193
  - layer_range: [12, 16]
    model: ./evol_merge_storage/input_models/OpenMath-Mistral-7B-v0.1-hf_3930120330
    parameters:
      weight: -0.20082135736432433
  - layer_range: [12, 16]
    model: ./evol_merge_storage/input_models/Mistral-7B-Instruct-v0.2_674785087
    parameters:
      weight: 0.8701476132113257
- sources:
  - layer_range: [16, 20]
    model: ./evol_merge_storage/input_models/RakutenAI-7B-chat_2028928689
    parameters:
      weight: 0.3344752410343695
  - layer_range: [16, 20]
    model: ./evol_merge_storage/input_models/OpenMath-Mistral-7B-v0.1-hf_3930120330
    parameters:
      weight: 0.4042316772497608
  - layer_range: [16, 20]
    model: ./evol_merge_storage/input_models/Mistral-7B-Instruct-v0.2_674785087
    parameters:
      weight: 0.6927692531006349
- sources:
  - layer_range: [20, 24]
    model: ./evol_merge_storage/input_models/RakutenAI-7B-chat_2028928689
    parameters:
      weight: -0.21516362235239625
  - layer_range: [20, 24]
    model: ./evol_merge_storage/input_models/OpenMath-Mistral-7B-v0.1-hf_3930120330
    parameters:
      weight: 0.24938519228176126
  - layer_range: [20, 24]
    model: ./evol_merge_storage/input_models/Mistral-7B-Instruct-v0.2_674785087
    parameters:
      weight: -0.04429340576598181
- sources:
  - layer_range: [24, 28]
    model: ./evol_merge_storage/input_models/RakutenAI-7B-chat_2028928689
    parameters:
      weight: 0.5010710670606616
  - layer_range: [24, 28]
    model: ./evol_merge_storage/input_models/OpenMath-Mistral-7B-v0.1-hf_3930120330
    parameters:
      weight: 0.7228729104891786
  - layer_range: [24, 28]
    model: ./evol_merge_storage/input_models/Mistral-7B-Instruct-v0.2_674785087
    parameters:
      weight: 0.20859571492467427
- sources:
  - layer_range: [28, 32]
    model: ./evol_merge_storage/input_models/RakutenAI-7B-chat_2028928689
    parameters:
      weight: 0.38158489002927837
  - layer_range: [28, 32]
    model: ./evol_merge_storage/input_models/OpenMath-Mistral-7B-v0.1-hf_3930120330
    parameters:
      weight: 0.3831129222059622
  - layer_range: [28, 32]
    model: ./evol_merge_storage/input_models/Mistral-7B-Instruct-v0.2_674785087
    parameters:
      weight: 0.3132094789750319
```
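To reproduce a merge from a configuration like this, the YAML can be fed to mergekit. The sketch below uses mergekit's Python entry points (MergeConfiguration and run_merge, as documented in the mergekit README); the config and output paths are placeholders, and the available options may vary by mergekit version.

```python
# Sketch of running the merge programmatically with mergekit.
# Assumes mergekit is installed and the input model directories above exist;
# "config.yml" and "./merged-model" are placeholder paths.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./merged-model",                    # output directory for the merged model
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU if one is available
        copy_tokenizer=True,             # copy a tokenizer into the output
    ),
)
```

The `mergekit-yaml` command-line tool accepts the same configuration file as an alternative to the Python API.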