---
base_model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B
library_name: transformers
pipeline_tag: text-generation
tags:
- llama-cpp
- DeepSeek-R1-Distill-Llama-8B
- gguf
- Q2_K
- 8b
- llama
- DeepSeek-R1
- deepseek-ai
- code
- math
- chat
- roleplay
- text-generation
- safetensors
- nlp
---

# roleplaiapp/DeepSeek-R1-Distill-Llama-8B-Q2_K-GGUF
- **Repo:** `roleplaiapp/DeepSeek-R1-Distill-Llama-8B-Q2_K-GGUF`
- **Original Model:** `DeepSeek-R1-Distill-Llama-8B`
- **Organization:** `deepseek-ai`
- **Quantized File:** `deepseek-r1-distill-llama-8b-q2_k.gguf`
- **Quantization:** `GGUF`
- **Quantization Method:** `Q2_K`
- **Use Imatrix:** `False`
- **Split Model:** `False`
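The quantized file listed above can be fetched with the `huggingface_hub` client; a minimal sketch (the returned path points into your local Hugging Face cache):

```python
from huggingface_hub import hf_hub_download

# Download the Q2_K GGUF file from this repo into the local HF cache
# and return its filesystem path.
model_path = hf_hub_download(
    repo_id="roleplaiapp/DeepSeek-R1-Distill-Llama-8B-Q2_K-GGUF",
    filename="deepseek-r1-distill-llama-8b-q2_k.gguf",
)
print(model_path)
```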
## Overview
This is a GGUF Q2_K quantized version of DeepSeek-R1-Distill-Llama-8B.
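Since this is a llama-cpp compatible GGUF, one way to run it locally is through `llama-cpp-python`; a minimal sketch, assuming the file was downloaded as shown above (the context size and prompt are illustrative choices, not values from this card):

```python
from llama_cpp import Llama

# Load the Q2_K GGUF; n_ctx is an illustrative context size,
# not a value prescribed by this card.
llm = Llama(
    model_path="deepseek-r1-distill-llama-8b-q2_k.gguf",
    n_ctx=4096,
)

# Chat-style completion using the chat template bundled in the GGUF.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what Q2_K quantization trades away."}]
)
print(out["choices"][0]["message"]["content"])
```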
## Quantization By
I often have idle A100 GPUs while building, testing, and training the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful.
Andrew Webby @ RolePlai