Model Card for parasail-ai/Mistral-7B-Instruct-v0.3-GPTQ-4bit

This is a GPTQ-quantized, 4-bit version of mistralai/Mistral-7B-Instruct-v0.3. See the original model card for more information.
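Below is a minimal usage sketch for loading the quantized checkpoint with the `transformers` library. It assumes the repository ships a GPTQ `quantization_config` in its `config.json` (so no extra arguments are needed at load time) and that a GPTQ backend such as `optimum` with `auto-gptq` or `gptqmodel` is installed alongside a CUDA-enabled PyTorch; the prompt text is illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "parasail-ai/Mistral-7B-Instruct-v0.3-GPTQ-4bit"

# The GPTQ quantization settings are read from the checkpoint's config,
# so a plain from_pretrained call is enough; device_map places weights on GPU.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Format a single-turn chat prompt with the model's chat template.
messages = [{"role": "user", "content": "Summarize GPTQ quantization in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate and print only the newly produced tokens.
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```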
