Tags: Text Generation · Transformers · Safetensors · English · qwen2 · conversational · text-generation-inference · Inference Endpoints
t1101675 committed · verified
Commit 9742ebb · Parent(s): bb12b6b

Update README.md

Files changed (1): README.md (+6 -9)
README.md CHANGED

@@ -13,7 +13,7 @@ pipeline_tag: text-generation
 
 # MiniPLM-QWen-200M
 
-[paper]() | [code](https://github.com/thu-coai/MiniPLM)
+[paper](https://arxiv.org/abs/2410.17215) | [code](https://github.com/thu-coai/MiniPLM)
 
 **MiniPLM-QWen-200M** is a 200M-parameter model with the QWen architecture, pre-trained from scratch on [the Pile](https://huggingface.co/datasets/monology/pile-uncopyrighted) using the MiniPLM knowledge distillation framework, with the [official QWen1.5-1.8B](https://huggingface.co/Qwen/Qwen1.5-1.8B) as the teacher model.
 
@@ -38,13 +38,10 @@ MiniPLM models achieve better performance given the same computation and scale
 ## Citation
 
 ```bibtex
-@misc{gu2024miniplmknowledgedistillationpretraining,
-      title={MiniPLM: Knowledge Distillation for Pre-Training Language Models},
-      author={Yuxian Gu and Hao Zhou and Fandong Meng and Jie Zhou and Minlie Huang},
-      year={2024},
-      eprint={2410.17215},
-      archivePrefix={arXiv},
-      primaryClass={cs.CL},
-      url={https://arxiv.org/abs/2410.17215},
+@article{miniplm,
+  title={MiniPLM: Knowledge Distillation for Pre-Training Language Models},
+  author={Yuxian Gu and Hao Zhou and Fandong Meng and Jie Zhou and Minlie Huang},
+  journal={arXiv preprint arXiv:2410.17215},
+  year={2024}
 }
 ```
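For readers who want to try the model the card describes, below is a minimal sketch of loading it and generating text with Hugging Face Transformers. The repository ID `MiniLLM/MiniPLM-Qwen-200M` is an assumption inferred from the model name on the card, not something the commit states; substitute the actual Hub path if it differs.

```python
# Minimal sketch: load the MiniPLM-QWen-200M checkpoint and generate text
# with Hugging Face Transformers (standard causal-LM API).
from transformers import AutoModelForCausalLM, AutoTokenizer

# ASSUMPTION: repo ID guessed from the model name; replace with the real Hub path.
model_id = "MiniLLM/MiniPLM-Qwen-200M"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The model was pre-trained on the Pile (general text), so plain-text
# continuation is a reasonable smoke test.
prompt = "Knowledge distillation is a technique for"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    top_p=0.9,
    temperature=0.8,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

At 200M parameters this runs comfortably on CPU; the sampling settings above are illustrative defaults, not values recommended by the card.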