Trained on an RTX 3090: roughly 9 hours at ~27 s/it, using the default 1218 iterations. Commit a2607fa of https://github.com/tloen/alpaca-lora.
This is a LoRA for the 7B model.
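The reported numbers are self-consistent; a quick sketch of the arithmetic (variable names are illustrative):

```python
# Sanity-check the reported training time:
# 1218 iterations at ~27 seconds per iteration.
iterations = 1218        # default iteration count reported above
seconds_per_iter = 27    # observed speed on the RTX 3090

total_hours = iterations * seconds_per_iter / 3600
print(f"{total_hours:.1f} hours")  # ~9.1 hours, matching the ~9-hour figure
```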