---
base_model:
- nitky/EZO-QwQ-32B-Preview
- Saxo/Linkbricks-Horizon-AI-Japanese-Base-32B
- huihui-ai/QwQ-32B-Preview-abliterated
- Qwen/QwQ-32B-Preview
library_name: transformers
tags:
- mergekit
- merge
---

# Linkbricks-Horizon-AI-Japanese-Advanced-V1-32B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview) as the base model.

### Models Merged

The following models were included in the merge:

* [nitky/EZO-QwQ-32B-Preview](https://huggingface.co/nitky/EZO-QwQ-32B-Preview)
* [Saxo/Linkbricks-Horizon-AI-Japanese-Base-32B](https://huggingface.co/Saxo/Linkbricks-Horizon-AI-Japanese-Base-32B)
* [huihui-ai/QwQ-32B-Preview-abliterated](https://huggingface.co/huihui-ai/QwQ-32B-Preview-abliterated)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: Qwen/QwQ-32B-Preview
dtype: bfloat16
merge_method: dare_ties
parameters:
  int8_mask: 1.0
slices:
- sources:
  - layer_range: [0, 64]
    model: Qwen/QwQ-32B-Preview
  - layer_range: [0, 64]
    model: Saxo/Linkbricks-Horizon-AI-Japanese-Base-32B
    parameters:
      density: 0.53
      weight: 0.3
  - layer_range: [0, 64]
    model: huihui-ai/QwQ-32B-Preview-abliterated
    parameters:
      density: 0.53
      weight: 0.4
  - layer_range: [0, 64]
    model: nitky/EZO-QwQ-32B-Preview
    parameters:
      density: 0.53
      weight: 0.3
tokenizer_source: union
```
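
### Reproducing the Merge

A minimal sketch of how a merge like this could be reproduced with mergekit's Python API, assuming mergekit is installed and the YAML above is saved as `config.yaml`; the output path and options shown are illustrative, not the exact settings used to build this model. The same config can also be run from the command line with the `mergekit-yaml` tool.

```python
# Sketch: run the DARE-TIES merge described by config.yaml with mergekit.
# Assumes: pip install mergekit; paths below are illustrative.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_PATH = "config.yaml"     # the YAML configuration shown above
OUTPUT_PATH = "./merged-model"  # where the merged weights will be written

# Parse and validate the merge configuration.
with open(CONFIG_PATH, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Execute the merge; cuda=True offloads tensor math to GPU when available.
run_merge(
    merge_config,
    OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,   # write a tokenizer alongside the merged weights
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```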