roleplaiapp/DeepSeek-R1-Distill-Llama-70B-Q8_0-GGUF

Repo: roleplaiapp/DeepSeek-R1-Distill-Llama-70B-Q8_0-GGUF
Original Model: DeepSeek-R1-Distill-Llama-70B
Organization: deepseek-ai
Quantized File: deepseek-r1-distill-llama-70b-q8_0.gguf
Quantization: GGUF
Quantization Method: Q8_0
Use Imatrix: False
Split Model: True

Overview

This is a GGUF Q8_0 quantized version of DeepSeek-R1-Distill-Llama-70B.
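
Below is a minimal usage sketch (not part of the original card), assuming llama-cpp-python and huggingface_hub are installed. Because the model is split into multiple GGUF shards, llama.cpp only needs the path to the first shard; the shard-naming pattern used here follows the usual gguf-split convention, so check the repo's file list for the exact filenames.

```python
# Minimal sketch: download the split Q8_0 GGUF and load it with llama-cpp-python.
import glob
import os

from huggingface_hub import snapshot_download
from llama_cpp import Llama

# Download every shard of the split GGUF into a local directory.
local_dir = snapshot_download(repo_id="roleplaiapp/DeepSeek-R1-Distill-Llama-70B-Q8_0-GGUF")

# For split models, point llama.cpp at the first shard; it finds the rest
# in the same directory automatically. The glob pattern is an assumption
# about how the shards are named.
shards = sorted(glob.glob(os.path.join(local_dir, "*q8_0*-00001-of-*.gguf")))
model_path = shards[0] if shards else os.path.join(
    local_dir, "deepseek-r1-distill-llama-70b-q8_0.gguf"
)

llm = Llama(
    model_path=model_path,
    n_ctx=4096,       # context window; raise if you have the memory
    n_gpu_layers=-1,  # offload all layers to GPU when possible
)

out = llm("Explain Q8_0 quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```

Note that at Q8_0 the weights of a 70B model occupy on the order of 70+ GB, so full GPU offload requires substantial VRAM; otherwise reduce n_gpu_layers for partial offload.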

Quantization By

I often have idle A100 GPUs while building, testing, and training the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful.

Andrew Webby @ RolePlai

Model size: 70.6B params
Architecture: llama
Precision: 8-bit (Q8_0)
