QuantFactory/llama3.1-gutenberg-8B-GGUF
This is a quantized version of nbeerbower/llama3.1-gutenberg-8B, created using llama.cpp.
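For a quick local test, the sketch below loads one of the GGUF files with llama-cpp-python; the filename, context size, and GPU-offload setting are assumptions, so substitute whichever quantization you actually download (any llama.cpp-compatible runtime works as well).

```python
# Minimal sketch: chat with the quantized model via llama-cpp-python.
# The model_path filename is a placeholder -- use the GGUF file you
# downloaded from this repository.
from llama_cpp import Llama

llm = Llama(
    model_path="llama3.1-gutenberg-8B.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful writing assistant."},
        {"role": "user", "content": "Write the opening paragraph of a gothic short story."},
    ],
    max_tokens=256,
    temperature=0.8,
)
print(response["choices"][0]["message"]["content"])
```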
Original Model Card
llama3.1-gutenberg-8B
VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct fine-tuned on jondurbin/gutenberg-dpo-v0.1.
Method
Fine-tuned for 3 epochs on 2x RTX 4060 GPUs.
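The card doesn't specify the training recipe beyond the dataset and hardware, but since jondurbin/gutenberg-dpo-v0.1 is a preference dataset, a minimal sketch of comparable DPO fine-tuning with Hugging Face TRL follows; the trainer choice and every hyperparameter are assumptions for illustration, not the author's actual configuration.

```python
# Hedged sketch of DPO fine-tuning on the Gutenberg preference dataset.
# Assumes a recent version of TRL; hyperparameters are illustrative only
# and are not the settings used to produce this model.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct"
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base)

# prompt / chosen / rejected columns, as expected by DPOTrainer
dataset = load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train")

args = DPOConfig(
    output_dir="llama3.1-gutenberg-8B",
    num_train_epochs=3,              # matches the card; other values assumed
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-6,
    beta=0.1,                        # DPO preference-loss temperature
    bf16=True,
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,      # named `tokenizer=` in older TRL releases
)
trainer.train()
```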