Any chance to share the conversion code?

#9
by haydenhong - opened

Thanks again for the great work. Do you have plans to share it with the public? Specifically, how were you able to merge the LoRA layers into the Llama model and still load the quantized weights into a Llama model? A standard LoRA-adapted model has extra lora_A and lora_B layers, which I would think could not be loaded into a plain Llama model from the Transformers library. How is this accomplished, if you do not mind sharing? Either way, thanks!
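For context on what "folding the adapters in" means: LoRA replaces a frozen weight W with W + (alpha/r) · B·A, so the adapter can be merged back into the base weight by adding the scaled product B·A to W, after which no extra layers remain and the checkpoint loads into a plain Llama model. (In practice, PEFT exposes this fold as `merge_and_unload()`; merging before quantization is one common ordering, though this repo's exact pipeline is not shown here.) A minimal NumPy sketch of the arithmetic, with made-up dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, alpha = 8, 2, 16          # hidden size, LoRA rank, LoRA alpha (illustrative values)

W = rng.standard_normal((d, d))           # frozen base weight
A = rng.standard_normal((r, d)) * 0.01    # lora_A (small random init)
B = np.zeros((d, r))                      # lora_B (zero init, as in the LoRA paper)

# Fold the adapter into the base weight: W' = W + (alpha/r) * B @ A.
# The result has the same shape as W, so it drops into the original layer.
W_merged = W + (alpha / r) * (B @ A)

# With lora_B zero-initialized, the merged weight equals the base weight.
assert W_merged.shape == W.shape
assert np.allclose(W_merged, W)
```

After this fold, the model's state dict contains only the standard Llama parameter names, which is why no lora_A/lora_B keys need to exist in the saved checkpoint.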

haydenhong changed discussion status to closed
