---
license: apache-2.0
datasets:
- oscar-corpus/oscar
---

This model is the result of second-stage pre-training of Google's Gemma 2B (https://huggingface.co/google/gemma-2b) for roughly 150B tokens on a combination of the English and Russian subsets of the OSCAR and Wikipedia datasets.

This is a raw pre-trained model, created with further fine-tuning in mind.
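
Since this is a plain base model (no instruction tuning, no chat template), it can be loaded like any other Gemma-style causal LM, for example with Hugging Face Transformers. A minimal sketch; the model id below is a hypothetical placeholder, substitute this repository's actual path:

```python
# Minimal usage sketch. "your-username/gemma-2b-en-ru" is a hypothetical
# placeholder; replace it with this repository's actual model id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/gemma-2b-en-ru"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# A raw pre-trained model continues text rather than answering chat messages,
# so prompt it with a prefix to complete.
prompt = "Машинное обучение — это"  # "Machine learning is ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```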

The goal of this project is to further research on the cross-linguistic capabilities of open-source LLMs and to create a strong open-source foundational LLM that is fluent in Russian. More details will follow in an upcoming blog post and/or research paper.

This model was pre-trained using a fork of EasyLM (a JAX-based framework) on a Google v4-32 TPU, generously provided under the TRC program. It reached a training loss of roughly 1.5 at a learning rate of about 5e-5.
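
For context, assuming the reported loss is the standard mean per-token cross-entropy in nats, a value of ~1.5 corresponds to a perplexity of about 4.5:

```python
import math

# Assumes the reported training loss is mean per-token cross-entropy in nats.
train_loss = 1.5
print(f"perplexity = exp({train_loss}) ≈ {math.exp(train_loss):.2f}")  # ≈ 4.48
```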

I'm planning to release a chat model that will undergo full-parameter SFT and DPO on Ilya Gusev's datasets.