Qra is a series of LLMs adapted to the Polish language, resulting from a collaboration between the National Information Processing Institute (OPI) and Gdańsk University of Technology (PG).

The original base model can be found on Hugging Face: https://huggingface.co/OPI-PG/Qra-1b

This GGUF file was quantized using this Colab notebook: https://colab.research.google.com/github/adithya-s-k/LLM-Alchemy-Chamber/blob/main/Quantization/GGUF_Quantization.ipynb
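For reference, notebooks like the one linked above typically automate the standard two-step llama.cpp pipeline: convert the Hugging Face checkpoint to an f16 GGUF file, then re-quantize it to 8-bit. The sketch below follows current llama.cpp conventions; the script name, binary name, and file paths are assumptions, not details taken from the notebook itself.

```python
# Sketch of the usual llama.cpp quantization pipeline (assumed, not
# copied from the linked notebook): HF checkpoint -> f16 GGUF -> Q8_0 GGUF.
import subprocess

def quantization_commands(hf_model_dir: str, out_f16: str, out_q8: str) -> list:
    """Return the two commands of the pipeline, in order."""
    return [
        # 1) Convert the Hugging Face checkpoint to an f16 GGUF file.
        ["python", "convert_hf_to_gguf.py", hf_model_dir,
         "--outfile", out_f16, "--outtype", "f16"],
        # 2) Re-quantize the f16 GGUF down to 8-bit (Q8_0).
        ["./llama-quantize", out_f16, out_q8, "Q8_0"],
    ]

def run_quantization(hf_model_dir: str, out_f16: str, out_q8: str) -> None:
    # Run both steps, stopping if either command fails.
    for cmd in quantization_commands(hf_model_dir, out_f16, out_q8):
        subprocess.run(cmd, check=True)
```

The intermediate f16 file can be deleted once the Q8_0 file is produced.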

This is my first model conversion. I'm not sure the whole process was correct (the model/GGUF file gives strange answers; perhaps I'm not configuring or prompting it properly), but I'm still a fresh learner.
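One possible explanation for the strange answers: Qra-1b is a base (pretrained) model rather than an instruction-tuned chat model, so it is meant to continue plain text, not answer chat-style questions. Below is a minimal sketch of running the quantized file with the llama-cpp-python package; the package choice, the GGUF filename, and the sampling settings are assumptions for illustration, not details from this card.

```python
def build_completion_prompt(text: str) -> str:
    """Qra-1b is a base model: give it plain Polish text to continue,
    not a question/answer chat template."""
    return text.rstrip() + " "

if __name__ == "__main__":
    # Assumed dependency: pip install llama-cpp-python
    from llama_cpp import Llama

    # Filename is illustrative; use the actual GGUF file from this repo.
    llm = Llama(model_path="qra-1b-q8_0.gguf", n_ctx=2048)
    prompt = build_completion_prompt("Stolicą Polski jest")
    out = llm(prompt, max_tokens=32, temperature=0.2)
    print(out["choices"][0]["text"])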

First bots over the fence, as the saying goes (a play on the Polish idiom "pierwsze koty za płoty": first attempts are always rough).

Congratulations to the creators; let's hope this Qra turns out to lay golden eggs (in Polish, "Qra" puns on "kura", meaning hen).

Cheers!

Format: GGUF
Model size: 1.1B params
Architecture: llama
Quantization: 8-bit


Model tree for Fibogacci/Qra-1B-GGUF
Base model: OPI-PG/Qra-1b (this model is a quantized version of it)