|
--- |
|
license: bigcode-openrail-m |
|
datasets: |
|
- bigcode/the-stack-dedup |
|
pipeline_tag: text-generation |
|
tags: |
|
- code |
|
--- |
|
|
|
[Santacoder](https://huggingface.co/bigcode/santacoder) finetuned on [Shadertoys](https://huggingface.co/datasets/Vipitis/Shadertoys) for 1000 steps with a batch size of 2 and a full sequence length of 2048.
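Since this is a finetune of Santacoder, it can be loaded like the base model with `transformers`. A minimal sketch follows; the repo id below is a placeholder assumption (substitute the actual model id of this checkpoint), and `trust_remote_code=True` is needed because Santacoder ships custom model code.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Hypothetical repo id -- replace with this model's actual id on the Hub.
checkpoint = "Vipitis/santacoder-finetuned-shadertoys"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True)

# Prompt with the start of a Shadertoy entry point and let the model complete it.
prompt = "void mainImage( out vec4 fragColor, in vec2 fragCoord )\n{\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```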
|
The original finetuning script can be found [here](https://github.com/loubnabnl/santacoder-finetuning); an adapted version will follow (soon^^).
|
|
|
The main purpose of this model is to explore whether finetuning improves performance on [ShaderEval](https://huggingface.co/spaces/Vipitis/ShaderEval); results to follow (sooner).
|
|
|
The license is carried over from the base model, and the finetuning dataset is released under the same license.