damerajee committed · Commit 53615b1 · verified · 1 Parent(s): 1d054df

Update README.md

Files changed (1)
  1. README.md +0 -1
README.md CHANGED
@@ -60,7 +60,6 @@ tokenizer.batch_decode(outputs)
  # Training details
  * The model was loaded in 4-bit
  * The target modules include "q_proj", "k_proj", "v_proj", "o_proj" (see the sketch after the diff)
- * The training took about 2 hours
  * The fine-tuning was done on a free Google Colab with a single T4 GPU (huge thanks to Unsloth for this)
  * Even though the full dataset was almost 3 million rows, the LoRA model was fine-tuned on only 1 million rows per language
 
 
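For context on the setup the README describes (a 4-bit base model with a LoRA adapter on the q_proj, k_proj, v_proj, and o_proj projections, trained on a free Colab T4), below is a minimal sketch using transformers, bitsandbytes, and peft. It is not the exact Unsloth training script from the commit; the base model id, LoRA rank, and alpha are placeholder assumptions not stated here.

```python
# Minimal sketch of the described configuration, not the author's actual script.
# Assumptions: base model id, r, and lora_alpha are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# "The model was loaded in 4-Bit": 4-bit NF4 quantization via bitsandbytes,
# with fp16 compute so it fits a free Colab T4 (no bf16 support on T4).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "base-model-id",  # placeholder: the base model is not named in this commit
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA adapter on the attention projections listed in the README.
lora_config = LoraConfig(
    r=16,            # assumed rank; not stated in the README
    lora_alpha=16,   # assumed alpha; not stated in the README
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

With this kind of setup, only the low-rank adapter matrices on the four attention projections are updated during fine-tuning, which is what keeps the memory footprint small enough for a single T4.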