---
language:
- it
tags:
- pretrained
- pytorch
- causal-lm
- autoround
- intel-autoround
- woq
- awq
- autoawq
- auto-awq
- intel
- italia
- italiano
- italian
license: mit
license_link: https://huggingface.co/iGeniusAI/Italia-9B-Instruct-v0.1/blob/main/LICENSE
model_name: Italia 9B Instruct v0.1
base_model:
- iGeniusAI/Italia-9B-Instruct-v0.1
inference: false
model_creator: iGeniusAI
pipeline_tag: text-generation
prompt_template: '{prompt} '
quantized_by: fbaldassarri
---

## Model Information
Quantized version of [iGeniusAI/Italia-9B-Instruct-v0.1](https://huggingface.co/iGeniusAI/Italia-9B-Instruct-v0.1) using torch.float32 for quantization tuning.
- 4 bits (INT4)
- group size = 128
- Symmetrical Quantization
- Method: AutoAWQ
Quantization framework: Intel AutoRound v0.4.3
Note: this INT4 version of Italia-9B-Instruct-v0.1 has been quantized to run inference on CPU.
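As a minimal loading sketch (assuming the quantized weights are available locally in the output directory produced by the recipe below, and that your transformers/autoawq stack supports AWQ-format weights on CPU):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Path produced by the quantization recipe below
# (adjust to wherever you saved or downloaded the model)
quantized_dir = "./AutoRound/iGeniusAI_Italia-9B-Instruct-v0.1-autoawq-int4-gs128-sym"

tokenizer = AutoTokenizer.from_pretrained(quantized_dir)
model = AutoModelForCausalLM.from_pretrained(quantized_dir, trust_remote_code=True)

prompt = "Qual è la capitale d'Italia?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```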
## Replication Recipe
### Step 1 Install Requirements
I suggest installing the requirements in a dedicated Python virtualenv or conda environment.
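For example, a dedicated virtualenv can be created like this (the environment name is illustrative):

```bash
python3 -m venv venv-autoround
source venv-autoround/bin/activate
```

Then download and unpack the AutoRound sources: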
```bash
wget https://github.com/intel/auto-round/archive/refs/tags/v0.4.3.tar.gz
tar -xvzf v0.4.3.tar.gz
cd auto-round-0.4.3
pip install -r requirements-cpu.txt --upgrade
```
### Step 2 Build Intel AutoRound wheel from sources
```bash
pip install -vvv --no-build-isolation -e .[cpu]
```
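To sanity-check the editable install, you can print the installed version (assuming `auto_round` exposes `__version__`, as recent releases do):

```bash
python -c "import auto_round; print(auto_round.__version__)"
```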
### Step 3 Script for Quantization
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "iGeniusAI/Italia-9B-Instruct-v0.1"
# Load the full causal LM (not the bare GPTNeoXModel) so the LM head is kept in the export
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_name)

from auto_round import AutoRound

bits, group_size, sym, device, amp = 4, 128, True, 'cpu', False
autoround = AutoRound(model, tokenizer, nsamples=128, iters=200, seqlen=512, batch_size=4, bits=bits, group_size=group_size, sym=sym, device=device, amp=amp)
autoround.quantize()
output_dir = "./AutoRound/iGeniusAI_Italia-9B-Instruct-v0.1-autoawq-int4-gs128-sym"
autoround.save_quantized(output_dir, format='auto_awq', inplace=True)
```
Note: the `GPTNeoXSdpaAttention` class is deprecated in favor of simply setting the `config._attn_implementation` attribute of the `GPTNeoXAttention` class, so this recipe requires `transformers<4.48`.
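If needed, pin transformers explicitly before running the script:

```bash
pip install "transformers<4.48"
```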
## License

This quantized model is released under the same [MIT license](https://huggingface.co/iGeniusAI/Italia-9B-Instruct-v0.1/blob/main/LICENSE) as the original model.
## Disclaimer
This quantized model comes with no warranty. It has been developed only for research purposes.