llama-7b-glora 🦙
This model was built via parameter-efficient GLoRA fine-tuning of llama-7b on the ShareGPT dataset. We adapt only the attention layers with GLoRA; the base model weights remain frozen.
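A minimal inference sketch using the standard `transformers` API. The Hub ID `your-org/llama-7b-glora` and the `### Human: / ### Assistant:` prompt template are assumptions for illustration; substitute the actual repository name and the prompt format used during fine-tuning.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical Hub ID -- replace with the actual repository for this model.
model_id = "your-org/llama-7b-glora"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Prompt template is an assumption; match whatever format the fine-tuning data used.
prompt = "### Human: Explain parameter-efficient fine-tuning in one sentence.\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```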
Model license: This model is released under a non-commercial license (see the LICENSE file), the same as LLaMA.