LoftQ Initialization
| Paper | Code | PEFT Example |
LoftQ (LoRA-fine-tuning-aware Quantization) provides a quantized backbone Q and LoRA adapters A and B, given a full-precision pre-trained weight W.
This model, LoftQ/Llama-2-7b-hf-fp16-64rank-gsm8k, is LoRA fine-tuned from LLAMA-2-7b on the GSM8K dataset.
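For context, here is a minimal sketch of how such a quantized backbone and LoftQ-initialized adapters can be produced with PEFT (an assumption-laden illustration, not the exact LoftQ training script; it assumes a recent peft release with LoftQ support and bitsandbytes installed — see the PEFT Example link above for the maintained recipe):

# Sketch: apply LoftQ initialization to a full-precision backbone via peft.
import torch
from transformers import AutoModelForCausalLM
from peft import LoftQConfig, LoraConfig, get_peft_model

# Llama-2 weights are gated on the Hub; authenticate first (e.g. `huggingface-cli login`).
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    torch_dtype=torch.float16,  # LoftQ init expects a non-quantized backbone here
)

loftq_config = LoftQConfig(loftq_bits=4)  # target bit-width for the quantized backbone Q
lora_config = LoraConfig(
    init_lora_weights="loftq",            # LoftQ initialization for adapters A and B
    loftq_config=loftq_config,
    r=64,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "up_proj", "down_proj", "gate_proj"],
)
peft_model = get_peft_model(base, lora_config)  # adapters A, B initialized to compensate the quantization of W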
Model Info
LoRA adapters
- rank: 64
- lora_alpha: 16
- target_modules: ["down_proj", "up_proj", "q_proj", "k_proj", "v_proj", "o_proj", "gate_proj"]
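These values mirror the adapter configuration shipped with the repo; as a quick sanity check (a sketch, assuming the repo is a standard PEFT adapter repository), they can be read back from the Hub:

# Sketch: inspect the adapter hyperparameters listed above directly from the Hub.
from peft import PeftConfig

cfg = PeftConfig.from_pretrained("LoftQ/Llama-2-7b-hf-fp16-64rank-gsm8k")
print(cfg.r, cfg.lora_alpha)        # expected: 64, 16
print(cfg.target_modules)           # the projection layers listed above
print(cfg.base_model_name_or_path)  # backbone the adapters attach to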
Usage
Inference
Here is example code for inference after the model has been fine-tuned on GSM8K.
import torch
from peft import AutoPeftModelForCausalLM

MODEL_ID = "LoftQ/Llama-2-7b-hf-fp16-64rank-gsm8k"

# Load the base backbone declared in the adapter config and attach the LoRA adapters
# in this repo (requires `peft` to be installed).
model = AutoPeftModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # you may want a different dtype for other models
    token=YOUR_HF_TOKEN,         # Llama-2 weights are gated; pass your Hugging Face token
)

# Optionally merge the LoRA adapters into the backbone for faster inference.
model = model.merge_and_unload()

# Do inference with `model` ...
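As a follow-up, here is a minimal generation sketch; the tokenizer repo and the GSM8K-style question below are illustrative assumptions, not the exact prompt template used for fine-tuning:

# Sketch: greedy generation with the `model` loaded above.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf", token=YOUR_HF_TOKEN)

question = (
    "Natalia sold clips to 48 of her friends in April, and then she sold "
    "half as many clips in May. How many clips did Natalia sell altogether "
    "in April and May?"
)
inputs = tokenizer(question, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=256)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))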
See the full evaluation on GSM8K on GitHub.
Experiment Results
We conducted supervised fine-tuning experiments on GSM8K and WikiText-2.
| Model | Bits | Rank | LoRA Initial | GSM8K |
|---|---|---|---|---|
| LLAMA-2-7b | 16 | 64 | Gaussian + 0 | 36.9 |
| LLAMA-2-7b | 4 | 64 | Gaussian + 0 (QLoRA) | 35.1 |
| LLAMA-2-7b | 4 | 64 | LoftQ | 35.0 |
Citation
@article{li2023loftq,
title={Loftq: Lora-fine-tuning-aware quantization for large language models},
author={Li, Yixiao and Yu, Yifan and Liang, Chen and He, Pengcheng and Karampatziakis, Nikos and Chen, Weizhu and Zhao, Tuo},
journal={arXiv preprint arXiv:2310.08659},
year={2023}
}