---
license: apache-2.0
library_name: transformers
inference: true
tags:
- unsloth
- trl
- sft
---

# Uploaded model

- **Developed by:** Sakalti
- **License:** apache-2.0
- **Finetuned from model:** Sakalti/Saba1-1.8B

This Qwen model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

This model was trained on the "kunishou/databricks-dolly-15k-ja" dataset, which is licensed under CC BY-SA 3.0 (Last Update: 2023-05-28).

- Parameters (without embedding layer): 1.31B
- Parameters (with embedding layer): 1.54B
- Layers: 28

# Overview

Saba1.5 is a model fine-tuned from Saba1. Its performance has not yet been measured, but it is expected to improve on the base model.
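# Usage

A minimal loading sketch with the `transformers` library, following the standard Hugging Face causal-LM API. The repo id below is an assumption; substitute the id of this model card's repository.

```python
# Minimal usage sketch (assumes `transformers` and `torch` are installed
# and the model weights are available on the Hugging Face Hub).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Sakalti/Saba1.5"  # hypothetical repo id; replace with this card's repo

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The model was fine-tuned on a Japanese instruction dataset,
# so a Japanese prompt is a natural test.
inputs = tokenizer("こんにちは、自己紹介をしてください。", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Downloading the weights on first use may take some time; `from_pretrained` caches them locally afterwards.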