8-bit GPTQ Quantization of huihui-ai/Qwen2.5-32B-Instruct-abliterated
This is an 8-bit GPTQ quantization of huihui-ai/Qwen2.5-32B-Instruct-abliterated, an uncensored version of Qwen2.5-32B-Instruct created with abliteration (see this article to learn more about the technique).
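The checkpoint can be used like any other GPTQ model on the Hub. Below is a minimal loading sketch, assuming the `optimum` and `auto-gptq` (or `gptqmodel`) packages are installed alongside `transformers`; only the repo id comes from this card, the prompt and generation settings are placeholders.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id of this quantized checkpoint (from this card).
model_id = "akhbar/Qwen2.5-32B-Instruct-abliterated-8bit-128g-actorder_True-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The GPTQ quantization config stored in the repo is picked up automatically.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Give me a short introduction to large language models."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```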
Model tree for akhbar/Qwen2.5-32B-Instruct-abliterated-8bit-128g-actorder_True-GPTQ
- Base model: Qwen/Qwen2.5-32B
- Finetuned: Qwen/Qwen2.5-32B-Instruct
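For reference, the repo name encodes the quantization settings: 8-bit weights, group size 128, and activation ordering (`actorder_True`). The sketch below shows how an equivalent quantization could be produced with the GPTQ integration in `transformers`; the calibration dataset (`"c4"`) and the output directory are assumptions, not necessarily what was used for this checkpoint.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

# Base abliterated model this quantization was derived from (from this card).
base_id = "huihui-ai/Qwen2.5-32B-Instruct-abliterated"
tokenizer = AutoTokenizer.from_pretrained(base_id)

gptq_config = GPTQConfig(
    bits=8,           # "8bit" in the repo name
    group_size=128,   # "128g" in the repo name
    desc_act=True,    # "actorder_True" in the repo name
    dataset="c4",     # assumed calibration set
    tokenizer=tokenizer,
)

# Quantize during load, then save the resulting GPTQ checkpoint.
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=gptq_config, device_map="auto"
)
out_dir = "Qwen2.5-32B-Instruct-abliterated-8bit-128g-actorder_True-GPTQ"
model.save_pretrained(out_dir)
tokenizer.save_pretrained(out_dir)
```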