---
library_name: transformers
tags: []
---

# GPTQ 4-bit quantized version of [DeepSeek-R1-Distill-Qwen-14B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B)

## Model Details

See details on the official page of the model: [DeepSeek-R1-Distill-Qwen-14B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B)

Quantized with [GPTQModel](https://github.com/ModelCloud/GPTQModel) on the [wikitext2 dataset](https://github.com/ModelCloud/GPTQModel/blob/main/examples/quantization/basic_usage_wikitext2.py) with `nsamples=256` and `seqlen=1024`.

Quantization config:

```
bits=4,
group_size=128,
desc_act=False,
damp_percent=0.01,
```

Minimum VRAM required: ~11 GB

## How to use

Using the `transformers` library with integrated GPTQ support:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

model_name = "avoroshilov/DeepSeek-R1-Distill-Qwen-14B-GPTQ_4bit-128g"

tokenizer = AutoTokenizer.from_pretrained(model_name)
quantized_model = AutoModelForCausalLM.from_pretrained(model_name, device_map="cuda")

# Build a chat prompt and apply the model's chat template
chat = [
    {"role": "user", "content": "Why is grass green?"},
]
question_tokens = tokenizer.apply_chat_template(chat, add_generation_prompt=True, return_tensors="pt").to(quantized_model.device)

# Generate and decode the answer
answer_tokens = quantized_model.generate(question_tokens, generation_config=GenerationConfig(max_length=2048))[0]
print(tokenizer.decode(answer_tokens))
```
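
The checkpoint itself was produced with GPTQModel's wikitext2 example script linked above. As a rough sketch only, a comparable 4-bit GPTQ quantization can also be run through the `transformers` built-in `GPTQConfig` integration; the parameters below mirror the config listed above, while the source model name and output directory are illustrative assumptions, not the exact commands used for this repo.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

# Illustrative names; the actual checkpoint was made with GPTQModel's wikitext2 example script
source_model = "deepseek-ai/DeepSeek-R1-Distill-Qwen-14B"
output_dir = "DeepSeek-R1-Distill-Qwen-14B-GPTQ_4bit-128g"

tokenizer = AutoTokenizer.from_pretrained(source_model)

# Mirrors the quantization config above: 4-bit weights, group size 128, no act-order, 1% damping
gptq_config = GPTQConfig(
    bits=4,
    group_size=128,
    desc_act=False,
    damp_percent=0.01,
    dataset="wikitext2",
    tokenizer=tokenizer,
)

# Quantization happens inside from_pretrained; it needs a GPU and the full-precision weights
model = AutoModelForCausalLM.from_pretrained(
    source_model,
    quantization_config=gptq_config,
    device_map="auto",
)

model.save_pretrained(output_dir)
tokenizer.save_pretrained(output_dir)
```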