---
license: apache-2.0
---

Llama 2 7B quantized with AutoGPTQ v0.3.0.

* Group size: 32
* Data type: INT4

This model is compatible with the first version of QA-LoRA. To fine-tune it with QA-LoRA, follow this tutorial: [Fine-tune Quantized Llama 2 on Your GPU with QA-LoRA](https://kaitchup.substack.com/p/fine-tune-quantized-llama-2-on-your)
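As a minimal sketch, a checkpoint quantized this way can be loaded for inference with AutoGPTQ's `from_quantized` API. The repo id below is a placeholder, not this model's actual name, and running it requires a CUDA GPU with the `auto-gptq` and `transformers` packages installed:

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

# Placeholder repo id -- replace with this model's actual Hugging Face id.
model_id = "user/llama-2-7b-gptq-int4-g32"

tokenizer = AutoTokenizer.from_pretrained(model_id)

# Loads the INT4, group-size-32 GPTQ weights directly onto the GPU.
model = AutoGPTQForCausalLM.from_quantized(model_id, device="cuda:0")

prompt = "Quantization reduces memory usage by"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For QA-LoRA fine-tuning itself, follow the linked tutorial; the sketch above only covers inference with the quantized weights.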