GPTQ 4-bit quantized version of DeepSeek-R1-Distill-Qwen-14B
Model Details
See the official model page for full details: DeepSeek-R1-Distill-Qwen-14B
Quantized with GPTQModel on the wikitext2 calibration dataset (nsamples=256, seqlen=1024). Quantization config:
bits=4,
group_size=128,
desc_act=False,
damp_percent=0.01
Minimum VRAM required: ~11GB
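For reference, below is a minimal sketch of how a checkpoint like this can be produced with GPTQModel under the config above. The calibration preprocessing (sample selection from wikitext2, sequence-length handling, and the output path) is an assumption for illustration, not the exact script used for this repo.

from datasets import load_dataset
from gptqmodel import GPTQModel, QuantizeConfig

# wikitext2 calibration text; keeping the first 256 non-empty rows approximates nsamples=256
calib_rows = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")["text"]
calibration_dataset = [row for row in calib_rows if row.strip()][:256]

# quantization config matching the values listed above
quant_config = QuantizeConfig(bits=4, group_size=128, desc_act=False, damp_percent=0.01)

model = GPTQModel.load("deepseek-ai/DeepSeek-R1-Distill-Qwen-14B", quant_config)
model.quantize(calibration_dataset)  # run GPTQ calibration and pack the 4-bit weights
model.save("DeepSeek-R1-Distill-Qwen-14B-GPTQ_4bit-128g")  # hypothetical output directory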
How to use
Using the transformers library with integrated GPTQ support (depending on your transformers version, this may additionally require the optimum package and a GPTQ backend such as gptqmodel or auto-gptq to be installed):
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
model_name = "avoroshilov/DeepSeek-R1-Distill-Qwen-14B-GPTQ_4bit-128g"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# load the quantized weights directly onto the GPU
quantized_model = AutoModelForCausalLM.from_pretrained(model_name, device_map='cuda')
# build a chat prompt and tokenize it with the model's chat template
chat = [{"role": "user", "content": "Why is grass green?"}]
question_tokens = tokenizer.apply_chat_template(chat, add_generation_prompt=True, return_tensors="pt").to(quantized_model.device)
# generate and decode the answer
answer_tokens = quantized_model.generate(question_tokens, generation_config=GenerationConfig(max_length=2048))[0]
print(tokenizer.decode(answer_tokens))
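Note that the decoded output of R1-distilled models typically begins with the model's reasoning wrapped in <think>...</think> tags, followed by the final answer; if long reasoning chains get truncated, increase max_length in the GenerationConfig.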