---
base_model:
- Qwen/Qwen2.5-Coder-1.5B-Instruct
- Qwen/Qwen2.5-1.5B-Instruct
- Qwen/Qwen2.5-Math-1.5B-Instruct
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) as the base model.

### Models Merged

The following models were included in the merge:
* [Qwen/Qwen2.5-Coder-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B-Instruct)
* [Qwen/Qwen2.5-Math-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B-Instruct)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: ties                         # Use TIES to merge multiple models
base_model: Qwen/Qwen2.5-1.5B-Instruct     # Base model for the merge
dtype: bfloat16                            # Data type for the merged model
models:
  - model: Qwen/Qwen2.5-1.5B-Instruct      # Base model
    parameters:
      weight: 0.5                          # Weight for the base model
  - model: Qwen/Qwen2.5-Math-1.5B-Instruct # Math-focused model
    parameters:
      density: 0.6                         # Retain 60% of significant parameters
      weight: 0.3                          # Weight for the math model
  - model: Qwen/Qwen2.5-Coder-1.5B-Instruct # Code-focused model
    parameters:
      density: 0.6                         # Retain 60% of significant parameters
      weight: 0.2                          # Weight for the coder model
parameters:
  normalize: true                          # Normalize merge weights so they sum to 1
  int8_mask: true                          # Store sparsification masks as int8 to save memory
```
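To reproduce the merge, this configuration can be saved to a file and passed to mergekit's `mergekit-yaml` CLI (e.g. `mergekit-yaml config.yml ./merged-model`). The result loads like any other Qwen2.5 checkpoint via `transformers`. Below is a minimal loading sketch; the path `./merged-model` is a placeholder for wherever the merge output (or its Hub upload) actually lives:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: replace with the local merge output directory
# or the Hub repo id where this merged model is uploaded.
model_path = "./merged-model"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

# Qwen2.5 instruct models expect the chat template.
messages = [
    {"role": "user", "content": "Compute 12 * 17, then write a Python one-liner that checks it."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```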