Solshine committed
Commit 04d5113 · verified · 1 parent: 364a16d

Update README.md

Files changed (1): README.md (+2 −0)
README.md CHANGED
@@ -24,6 +24,8 @@ https://firstdonoharm.dev/version/3/0/cl-eco-extr.html
  - **Finetuned from model:** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
  - **Dataset Used:** CopyleftCultivars/Training-Ready_NF_chatbot_conversation_history, curated from real-world agriculture and natural farming questions and the best answers from a previous POC chatbot, which were then lightly edited by domain experts

+ Using real-world user data from a previous farmer-assistant chatbot service and additional curated datasets (prioritizing sustainable regenerative organic farming practices), Gemma 2B and Mistral 7B LLMs were iteratively fine-tuned and tested against each other as well as on basic benchmarks; the Gemma 2B fine-tune emerged victorious, while this Mistral fine-tune remained viable. LoRA adapters were saved for each model.
+
  Shout out to roger j (bhugxer) for help with the dataset and training framework.

  This Mistral model was trained with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
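Since the base model is unsloth/mistral-7b-instruct-v0.2-bnb-4bit, prompts at inference time would normally follow Mistral-instruct's `[INST] ... [/INST]` template. A minimal sketch of building such a prompt — the helper name, the steering text, and the example question are illustrative, not from this repo:

```python
def build_mistral_prompt(user_message: str, system_hint: str = "") -> str:
    """Wrap a user question in the Mistral-instruct [INST] ... [/INST] template.

    Mistral v0.2 instruct models have no dedicated system role, so any
    steering text is simply prepended to the user turn.
    """
    content = f"{system_hint}\n\n{user_message}".strip() if system_hint else user_message
    return f"<s>[INST] {content} [/INST]"


# Hypothetical usage for this agriculture-focused fine-tune:
prompt = build_mistral_prompt(
    "What cover crops improve nitrogen fixation in sandy soil?",
    system_hint="You are a natural-farming assistant.",
)
print(prompt)
```

The resulting string can be tokenized and passed to the model's `generate` call as with any causal LM.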