
Quantization made by Richard Erkhov.

- GitHub
- Discord
- Request more models

QuantLM_2.3B_8bit_Unpacked - AWQ

Original model description:

License: apache-2.0

QuantLM 2.3B 8 bit

QuantLM, unpacked to FP16 format, is compatible with FP16 GEMMs. After unpacking, QuantLM has the same architecture as LLaMA.

import torch
import transformers

model_name = "SpectraSuite/QuantLM_2.3B_8bit_Unpacked"
# Adjust temperature, repetition penalty, top_k, top_p and other sampling parameters to your needs.
pipeline = transformers.pipeline("text-generation", model=model_name, model_kwargs={"torch_dtype": torch.float16}, device_map="auto")
# This is a base (pretrained) LLM, not instruction or chat tuned; adjust your prompt accordingly.
pipeline("Once upon a time")