ThreatFlux-Qwen2.5-7B-Instruct
Model Information
- Author: Wyatt Roersma
- Organization: ThreatFlux
- Model Type: Fine-tuned Language Model
- Base Model: Qwen2.5-Coder-7B-Instruct
- License: Apache 2.0
This model is a fine-tuned version of Qwen2.5-Coder-7B-Instruct, specialized for YARA rule generation and analysis. It inherits the strong code generation and reasoning capabilities of the base model and adds domain knowledge for security applications.
Deployment Methods
Ollama
ollama run hf.co/vtriple/Qwen-2.5-7B-Threatflux
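Once the model has been pulled, it can also be queried programmatically through Ollama's local REST API. The following is a minimal sketch, assuming the default Ollama port (11434) and the standard /api/generate endpoint; the prompt is illustrative.
import requests

# Query the locally running Ollama server; the model name matches the
# `ollama run` command above.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "hf.co/vtriple/Qwen-2.5-7B-Threatflux",
        "prompt": "Write a YARA rule that detects PE executables.",
        "stream": False,
    },
    timeout=300,
)
print(resp.json()["response"])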
llama-cpp-python
from llama_cpp import Llama

# Download the GGUF weights from the Hugging Face Hub and load them
llm = Llama.from_pretrained(
    repo_id="vtriple/Qwen-2.5-7B-Threatflux",
    filename="threatflux.gguf",
)

# Ask for a rule and print the model's reply
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a YARA rule for..."}]
)
print(response["choices"][0]["message"]["content"])
llama.cpp
Install via Homebrew
brew install llama.cpp
Run the Model
llama-cli \
--hf-repo "vtriple/Qwen-2.5-7B-Threatflux" \
--hf-file threatflux.gguf \
-p "You are a helpful assistant" \
--conversation
For more details on the llama.cpp implementation, refer to the llama.cpp documentation.
Model Details
Base Model Architecture
- Model Type: Causal Language Model
- Parameters: 7.61B (Base) / 6.53B (Non-Embedding)
- Architecture: Transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Layers: 28
- Attention Heads: 28 for Q and 4 for KV (GQA)
- Context Length: 131,072 tokens
- Training Data: Built on Qwen2.5's 5.5 trillion token dataset
Fine-tuning Specifications
- Training Dataset: ~1,600 specialized samples curated by ThreatFlux
- Training Type: Instruction tuning
- Domain Focus: YARA rules, malware analysis, threat detection
Example Output
Here's an example of the model's YARA rule generation capabilities:
private rule Track_EXE_Files {
    meta:
        description = "Detects all EXE (Executable) files"
        author = "ThreatFlux"
        version = "1.0"
    condition:
        uint16(0) == 0x5A4D
}
The model provides detailed explanations of generated rules:
- Notes that the rule is marked private, so it does not report matches on its own and is intended to be referenced by other rules
- Explains the PE signature check (the "MZ" header, 0x5A4D read as a little-endian uint16 at offset 0)
- Walks through the condition logic
- Provides technical details about the hex values and their significance
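Because generated rules still require human validation (see Limitations below), a quick syntax and behaviour check is worthwhile before deployment. The following is a minimal sketch using the yara-python package; since a private rule never reports matches on its own, it is wrapped in a public rule for testing. The rule text is the illustrative example above.
import yara

rule_source = """
private rule Track_EXE_Files {
    meta:
        description = "Detects all EXE (Executable) files"
        author = "ThreatFlux"
        version = "1.0"
    condition:
        uint16(0) == 0x5A4D
}

rule EXE_File {
    condition:
        Track_EXE_Files
}
"""

# yara.compile() raises yara.SyntaxError on invalid rules, which makes it a
# cheap first validation step before any deployment.
rules = yara.compile(source=rule_source)

# The MZ header (0x5A4D little-endian at offset 0) should match...
print(rules.match(data=b"MZ\x90\x00" + b"\x00" * 60))
# ...while arbitrary data should not.
print(rules.match(data=b"not an executable"))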
Intended Use
This model is designed to assist security professionals in:
- Generating and optimizing YARA rules
- Analyzing malware patterns
- Supporting threat hunting workflows
- Enhancing detection capabilities
Performance Metrics and Testing
Testing Environment
- GPU: NVIDIA H100 NVL (48.3 TFLOPS)
- GPU Memory: 93.6 GB
- Memory Bandwidth: 2271.1 GB/s
- PCIe: 5.0 x16 (54.4 GB/s)
- CPU: AMD EPYC 9124 16-Core Processor
- System Memory: 193 GB
- Storage: SAMSUNG MZQLB7T6HALA-00AAZ
- CUDA Version: 12.4
- Network Speed: 8334.9/7516.3 Mbps (Up/Down)
Testing Results
- Total Training Time: ~45 hours
- Average Cost per Hour: $2.6667 (GPU)
- Testing Duration: Multiple sessions totaling approximately 23.953 hours
- Testing Environment: Ubuntu Latest with SSH access
Performance and Limitations
Strengths
- Specialized knowledge in YARA rule syntax and best practices
- Inherits Qwen2.5-Coder's strong code reasoning abilities
- Long context understanding for complex analysis
- Maintains mathematical and general coding competencies
Limitations
- Should be used as an assistant, not a replacement for security expertise
- Generated rules require human validation
- Performance varies based on deployment environment
- Inherits base model's limitations
Technical Specifications
Deployment Requirements
- Compatible with Hugging Face Transformers (requires version ≥4.37.0; a loading sketch follows this list)
- Supports both CPU and GPU deployment
- Can utilize YaRN for long context processing
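If Transformers-format weights are published in the repository (an assumption; the deployment examples above use the GGUF file), the model can be loaded with the standard Qwen2.5 chat workflow. The prompt below is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vtriple/Qwen-2.5-7B-Threatflux"  # assumes Transformers-format weights are available

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # use float32 on CPU-only machines
    device_map="auto",
)

messages = [{"role": "user", "content": "Write a YARA rule for detecting UPX-packed PE files."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))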
Configuration
For extended context length support (>32,768 tokens), add to config.json:
{
  "rope_scaling": {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn"
  }
}
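If you prefer not to edit config.json by hand, the same YaRN settings can be applied at load time through the Transformers config object. This is a minimal sketch under the same assumption as above (Transformers-format weights available in the repository); the values mirror the snippet.
from transformers import AutoConfig, AutoModelForCausalLM

model_id = "vtriple/Qwen-2.5-7B-Threatflux"  # assumes Transformers-format weights are available

config = AutoConfig.from_pretrained(model_id)
# Mirror the config.json snippet above: enable YaRN scaling for contexts beyond 32,768 tokens
config.rope_scaling = {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn",
}

model = AutoModelForCausalLM.from_pretrained(model_id, config=config)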
Training Details
This model was fine-tuned on the Qwen2.5-Coder-7B-Instruct base, which includes:
- Comprehensive code generation capabilities
- Strong mathematical reasoning
- Extended context understanding
- Security-focused enhancements
The fine-tuning process focused on:
- YARA rule syntax and structure
- Pattern matching optimization
- Security use cases
- Real-world application scenarios
License
This model inherits the Apache 2.0 license from its base model. See LICENSE for details.
Community Support
We're working to make this model freely accessible to everyone through a dedicated public inference server. If you'd like to support this initiative, you can contribute to our server fund: Support the ThreatFlux Public Server. Your support helps maintain:
- Free public API access
- Consistent model availability
- Community training improvements
- Dedicated infrastructure
Citation
If you use this model in your work, please cite both this model and the original Qwen2.5-Coder work:
@article{hui2024qwen2,
title={Qwen2.5-Coder Technical Report},
author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others},
journal={arXiv preprint arXiv:2409.12186},
year={2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}