gemma-2b-awq-int4
gemma-2b-awq-int4 is a version of the Gemma 2B base model that was quantized using the AWQ method developed by Lin et al. (2023).
Please refer to the Original Gemma Model Card for details about the model preparation and training processes.
Dependencies
- autoawq==0.2.5 – AutoAWQ was used to quantize the gemma-2b model (see the quantization sketch below).
- vllm==0.4.2 – vLLM was used to host models for benchmarking (see the serving sketch below).
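The exact quantization recipe is not documented on this card, but a typical AutoAWQ 0.2.5 run looks like the sketch below. The model paths and the quant_config values (4-bit weights, group size 128, zero point, GEMM kernel) are assumptions based on AutoAWQ defaults, not settings confirmed by the author.

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

# Paths are illustrative; the calibration settings below are AutoAWQ defaults,
# not values confirmed for this checkpoint.
model_path = "google/gemma-2b"
quant_path = "gemma-2b-awq-int4"
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# Load the FP16 base model and its tokenizer
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Run AWQ calibration and quantize the weights to INT4
model.quantize(tokenizer, quant_config=quant_config)

# Save the quantized weights and the tokenizer alongside them
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```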
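For benchmarking, the quantized checkpoint can be loaded with vLLM 0.4.2, which supports AWQ kernels out of the box. The sketch below is a minimal offline-inference example; the model path, prompt, and sampling parameters are placeholders rather than the settings used for the card's benchmarks.

```python
from vllm import LLM, SamplingParams

# Point vLLM at the quantized checkpoint and enable the AWQ kernels
llm = LLM(model="gemma-2b-awq-int4", quantization="awq")

# Generate a short completion to sanity-check the quantized model
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=128)
outputs = llm.generate(["The AWQ method quantizes weights by"], sampling_params)
print(outputs[0].outputs[0].text)
```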