---
license: apache-2.0
datasets:
- databricks/databricks-dolly-15k
language:
- en
metrics:
- rouge
base_model:
- openai-community/gpt2-large
pipeline_tag: text-generation
---
# SeqKD-gpt2-760M

[paper](https://arxiv.org/abs/2306.08543) | [code](https://github.com/microsoft/LMOps/tree/main/minillm)

**SeqKD-gpt2-760M** is a gpt2-large (760M) model distilled from [gpt2-xlarge (1.5B)](https://huggingface.co/MiniLLM/teacher-gpt2-1.5B) on [databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) with sequence-level forward KLD (SeqKD): the teacher generates responses to the training prompts, and the student is fine-tuned on those responses with standard cross-entropy.
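
Conceptually, this reduces to a two-step recipe: sample responses from the teacher, then run supervised fine-tuning on the samples. Below is a minimal PyTorch/`transformers` sketch of that objective, assuming the teacher checkpoint linked above; the prompt, sampling settings, and loss masking are illustrative, not the exact recipe from the MiniLLM training code.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

teacher = AutoModelForCausalLM.from_pretrained("MiniLLM/teacher-gpt2-1.5B")
student = AutoModelForCausalLM.from_pretrained("openai-community/gpt2-large")
tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2-large")

prompt = "Explain knowledge distillation in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")

# Step 1: sample a response from the teacher for each training prompt.
with torch.no_grad():
    sequence = teacher.generate(
        **inputs, max_new_tokens=64, do_sample=True, top_p=0.95,
        pad_token_id=tokenizer.eos_token_id,
    )

# Step 2: fine-tune the student with token-level cross-entropy on the
# teacher's sample, a single-draw Monte Carlo estimate of the
# sequence-level forward KLD. Prompt tokens are masked out of the loss.
labels = sequence.clone()
labels[:, : inputs["input_ids"].shape[1]] = -100
loss = student(input_ids=sequence, labels=labels).loss
loss.backward()  # step an optimizer here in a real training loop
```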

It is used as a baseline for [MiniLLM](https://huggingface.co/MiniLLM/MiniLLM-gpt2-760M).
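
A quick way to try the model is shown below; the repo id `MiniLLM/SeqKD-gpt2-760M` is inferred from the model name, and the instruction-style prompt is illustrative rather than the exact template used during distillation.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("MiniLLM/SeqKD-gpt2-760M")
model = AutoModelForCausalLM.from_pretrained("MiniLLM/SeqKD-gpt2-760M")

prompt = "Instruction: List three uses of knowledge distillation.\nResponse:"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(
    **inputs, max_new_tokens=128, do_sample=True, top_p=0.9,
    temperature=0.7, pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```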

## Other Baselines
+ [SFT w/o KD](https://huggingface.co/MiniLLM/SFT-gpt2-760M)
+ [KD](https://huggingface.co/MiniLLM/KD-gpt2-760M)
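
## Evaluation

The card metadata lists ROUGE as the evaluation metric. A minimal ROUGE-L scoring sketch with the Hugging Face `evaluate` library follows (an assumed tooling choice; the prediction and reference strings are placeholders, not real model outputs):

```python
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["generated response from SeqKD-gpt2-760M"],
    references=["gold response from databricks-dolly-15k"],
)
print(scores["rougeL"])
```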


## Citation
```bibtex
@inproceedings{minillm,
  title={MiniLLM: Knowledge Distillation of Large Language Models},
  author={Gu, Yuxian and Dong, Li and Wei, Furu and Huang, Minlie},
  booktitle={Proceedings of ICLR},
  year={2024}
}
```