Update README.md
README.md CHANGED
@@ -64,5 +64,6 @@ tokenizer.batch_decode(outputs)
 * Even though the full dataset was almost 3 million rows, the LoRA model was fine-tuned on only 1 million rows for each language.
 
 # Limitations
-The model was not fully trained on all the dataset and Much evaluation was not done so any contributions will be helpful
+The model was not trained on the full dataset and not much evaluation was done, so any contributions would be helpful.
+
 As of right now this is a smaller model; a better model trained on a better dataset will be released.
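
For context on the per-language subsampling mentioned in the diff above, here is a minimal sketch of how roughly 1 million rows per language could be drawn with the Hugging Face `datasets` library. The dataset ID, language codes, and the `language` column are placeholders for illustration, not taken from this repository.

```python
from datasets import load_dataset, concatenate_datasets

# All identifiers below are placeholders, not from this repo.
DATASET_ID = "username/parallel-corpus"   # hypothetical dataset ID
LANGUAGES = ["en", "hi"]                  # assumed language codes
ROWS_PER_LANGUAGE = 1_000_000             # ~1M rows per language, as stated in the README

full = load_dataset(DATASET_ID, split="train")

subsets = []
for lang in LANGUAGES:
    # Keep only this language's rows (assumes a "language" column), shuffle, take up to 1M.
    rows = full.filter(lambda ex, l=lang: ex["language"] == l)
    subsets.append(rows.shuffle(seed=42).select(range(min(ROWS_PER_LANGUAGE, len(rows)))))

train_subset = concatenate_datasets(subsets)
```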