Update README.md
README.md CHANGED
@@ -174,7 +174,7 @@ The intermediate stage checkpoints are released in <a href="https://huggingface.

 <details><summary>3. Optimizer States Before Annealing</summary>

-
+<a href="https://huggingface.co/yulan-team/YuLan-Mini-Before-Annealing">YuLan-Mini-Before-Annealing</a>
 </details>


@@ -213,11 +213,11 @@ Intermediate optimizer states will be released in a future update.

 ### What you can do with these pre-training resources

-1. **Pre-train** your own LLM. You can use our data and curriculum to train a model that's just as powerful as YuLan-Mini.
-2. Perform your own **learning rate annealing**. During the annealing phase, YuLan-Mini's learning ability is at its peak. You can resume training from the checkpoint before annealing and use your own dataset for learning rate annealing.
+1. **Pre-train** your own LLM. You can use [our data](https://huggingface.co/yulan-team/YuLan-Mini-Datasets) and curriculum to train a model that's just as powerful as YuLan-Mini.
+2. Perform your own **learning rate annealing**. During the annealing phase, YuLan-Mini's learning ability is at its peak. You can resume training from [the checkpoint before annealing](https://huggingface.co/yulan-team/YuLan-Mini-Before-Annealing) and use your own dataset for learning rate annealing.
 3. **Fine-tune** the Instruct version of the LLM. You can use the YuLan-Mini base model to train your own Instruct version.
 4. **Training dynamics** research. You can use YuLan-Mini's intermediate checkpoints to explore internal changes during the pre-training process.
-5. **Synthesize** your own data. You can use YuLan-Mini's data pipeline to clean and generate your own dataset.
+5. **Synthesize** your own data. You can use YuLan-Mini's [data pipeline](https://github.com/RUC-GSAI/YuLan-Mini) to clean and generate your own dataset.

 ---

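Item 2 in the hunk above points at the `YuLan-Mini-Before-Annealing` checkpoint for a custom annealing run. Below is a minimal sketch of what such a resume could look like with the Hugging Face `transformers` Trainer, assuming the repository exposes a standard model checkpoint; the optimizer-state format and the exact starting learning rate are documented in the YuLan-Mini repository, and the corpus and hyperparameters here are placeholders.

```python
# Sketch only: resume from the pre-annealing checkpoint and decay the LR over your own data.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

ckpt = "yulan-team/YuLan-Mini-Before-Annealing"  # repository named in the diff above
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForCausalLM.from_pretrained(ckpt)

# Placeholder annealing corpus: substitute your own high-quality data here.
raw = Dataset.from_dict({"text": ["Replace this with your own annealing corpus."]})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

train_ds = raw.map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)  # causal-LM labels

args = TrainingArguments(
    output_dir="yulan-mini-annealed",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,
    learning_rate=1e-4,            # placeholder: start where the original LR curve left off
    lr_scheduler_type="linear",    # decay toward zero over the annealing run
    warmup_steps=0,
    num_train_epochs=1,
    bf16=True,
    logging_steps=50,
)

Trainer(model=model, args=args, train_dataset=train_ds, data_collator=collator).train()
```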
@@ -255,6 +255,10 @@ python -m sglang.launch_server --model-path yulan-team/YuLan-Mini --port 30000 -

 ---

+## The Team
+
+YuLan-Mini is developed and maintained by [AI Box, Renmin University of China](http://aibox.ruc.edu.cn/).
+
 ## License

 - The code in this repository is released under the [MIT License](./LICENSE).
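The header of the third hunk quotes the README's SGLang serving command (`python -m sglang.launch_server --model-path yulan-team/YuLan-Mini --port 30000 ...`). Once a server launched that way is running, it can typically be queried through SGLang's OpenAI-compatible endpoint; a rough sketch, where the port and model path follow the quoted command and the rest assumes a default local setup:

```python
# Sketch: query a locally running sglang server via its OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:30000/v1",  # port from the launch command above
    api_key="EMPTY",                       # sglang does not check the key by default
)

resp = client.completions.create(
    model="yulan-team/YuLan-Mini",         # matches --model-path at launch
    prompt="Renmin University of China is",
    max_tokens=64,
    temperature=0.7,
)
print(resp.choices[0].text)
```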
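Item 4 in the second hunk mentions training-dynamics research on the intermediate checkpoints. One simple starting point is to measure how far each parameter tensor moves between two released checkpoints; a sketch, where both repository ids are hypothetical placeholders for whichever intermediate checkpoints you pick from the collection linked in the README:

```python
# Sketch: per-tensor relative drift between two YuLan-Mini training checkpoints.
import torch
from transformers import AutoModelForCausalLM

# Hypothetical ids: substitute two real intermediate checkpoints from the collection.
early = AutoModelForCausalLM.from_pretrained("yulan-team/YuLan-Mini-Intermediate-Early")
late = AutoModelForCausalLM.from_pretrained("yulan-team/YuLan-Mini-Intermediate-Late")

late_params = dict(late.named_parameters())
drift = {}
with torch.no_grad():
    for name, p_early in early.named_parameters():
        p_late = late_params[name]
        drift[name] = ((p_late - p_early).norm() / (p_early.norm() + 1e-12)).item()

# Print the ten tensors that changed the most between the two checkpoints.
for name, d in sorted(drift.items(), key=lambda kv: kv[1], reverse=True)[:10]:
    print(f"{name}\trelative drift = {d:.4f}")
```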