---
license: mit
base_model:
- princeton-nlp/gemma-2-9b-it-SimPO
library_name: transformers
tags:
- llama-cpp
- GGUF
- Quantization
---

# Puhaha/gemma-2-9b-it-SimPO_q4_k_m

This model was converted to GGUF format (Q4_K_M quantization) from [`princeton-nlp/gemma-2-9b-it-SimPO`](https://huggingface.co/princeton-nlp/gemma-2-9b-it-SimPO) using llama.cpp. Refer to the [original model card](https://huggingface.co/princeton-nlp/gemma-2-9b-it-SimPO) for more details on the model.

Enjoy :D
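## Example usage

Since the GGUF file is meant to be run with llama.cpp, below is a minimal sketch of loading it through the `llama-cpp-python` bindings. The local filename `gemma-2-9b-it-SimPO_q4_k_m.gguf` and the context length are assumptions; adjust them to match the file you actually download from this repo.

```python
# Minimal sketch using the llama-cpp-python bindings (pip install llama-cpp-python).
# The model filename and n_ctx below are assumptions; change them as needed.
from llama_cpp import Llama

# Load the quantized GGUF file from disk.
llm = Llama(
    model_path="gemma-2-9b-it-SimPO_q4_k_m.gguf",  # assumed local path to the downloaded file
    n_ctx=4096,                                    # assumed context window
)

# Run a single chat completion and print the assistant's reply.
output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF quantization in one sentence."}],
    max_tokens=128,
)
print(output["choices"][0]["message"]["content"])
```

You can equally run the file with the llama.cpp CLI or server binaries; the Python route above is just one convenient way to test the quantized model locally.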