
llama-7b-glora 🦙

This model was built via parameter-efficient GLoRA finetuning of llama-7b on the ShareGPT dataset. Only the attention layers are adapted with GLoRA.
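The idea behind the adaptation above can be sketched in a few lines of NumPy. This is a toy illustration of a GLoRA-style reparameterized weight update, not the code used to train this model: the frozen projection weight `W0`, the rank `r`, and the specific support tensors `A` and `B` here are illustrative assumptions (GLoRA's full formulation also searches over per-layer support-tensor configurations).

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2  # toy hidden size and low-rank dimension

# Frozen pretrained attention projection weight (not updated during finetuning).
W0 = rng.normal(size=(d, d))

# Trainable low-rank support tensors: A rescales the frozen weight,
# B adds a direct low-rank update. Each is the product of two d x r
# factors, so trainable parameters scale with r rather than d * d.
A = (rng.normal(size=(d, r)) @ rng.normal(size=(r, d))) * 0.01
B = (rng.normal(size=(d, r)) @ rng.normal(size=(r, d))) * 0.01

# Effective adapted weight; at inference time this can be merged back
# into a single dense matrix, adding no extra latency.
W_eff = W0 + W0 @ A + B

x = rng.normal(size=(d,))
y = W_eff @ x  # adapted attention projection applied to an input
```

Because `W_eff` collapses to a single `d x d` matrix, the adapter can be folded into the base weights after training, which is one of the practical appeals of this family of methods.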

Model license: this model is released under a non-commercial license (see the LICENSE file), the same license as LLaMA.