---
language:
- nl
pipeline_tag: text-generation
tags:
- granite
- granite 3.0
- schaapje
- chat
license: apache-2.0
inference: false
---

<p align="center">
  <img src="sheep.png" alt="Schaapje logo" width="750"/>
</p>

# Schaapje-2B-Chat-V1.0-GGUF

## Introduction

This is a collection of GGUF files created from [Schaapje-2B-Chat-V1.0](https://huggingface.co/robinsmits/Schaapje-2B-Chat-V1.0).

It contains the model in the following quantization formats:

`Q5_0`, `Q5_K_M`, `Q6_K`, `Q8_0`

## Requirements

Before you can use the GGUF files, you need to clone the [llama.cpp repository](https://github.com/ggerganov/llama.cpp) and build it following the official installation guide.
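As a rough sketch, cloning and building llama.cpp currently looks like the following (the CMake-based workflow below reflects the llama.cpp README at the time of writing; always check the official guide, as the exact steps may change):

```shell
# Clone the llama.cpp repository
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# Build with CMake; the resulting binaries (including llama-cli)
# end up in build/bin/
cmake -B build
cmake --build build --config Release
```

Hardware-specific backends (CUDA, Metal, Vulkan, etc.) need extra CMake flags; see the llama.cpp build documentation for those.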

## Recommendation

Experimenting with the llama.cpp sampling parameters can have a large impact on the quality of the generated text, so it is recommended to try different settings yourself. In my own experiments, quantization `Q5_0` or better gave good quality output.
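As a starting point for your own experimentation, an interactive chat session with the `Q5_0` file could be launched roughly like this (the GGUF file name and the sampling values are illustrative assumptions, not settings validated for this model):

```shell
# Run an interactive chat with llama-cli (built in the previous step).
# The model file name is an assumption; use the actual file name of
# the GGUF file you downloaded from this repository.
./build/bin/llama-cli \
  -m ./Schaapje-2B-Chat-V1.0.Q5_0.gguf \
  -cnv \
  --temp 0.7 \
  --top-k 40 \
  --repeat-penalty 1.1 \
  -n 256
```

Here `-cnv` enables conversation (chat) mode, and `--temp`, `--top-k`, and `--repeat-penalty` are the sampling parameters worth varying first.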