---
license: bigcode-openrail-m
datasets:
- bigcode/the-stack-dedup
- Vipitis/Shadertoys-fine
pipeline_tag: text-generation
---
[Santacoder](https://huggingface.co/bigcode/santacoder) finetuned on [Shadertoys](https://huggingface.co/datasets/Vipitis/Shadertoys) for 1000 steps with a batch size of 2 and a full sequence length of 2048.
Original finetuning script found [here](https://github.com/loubnabnl/santacoder-finetuning); an adapted version will follow (soon^^).
The main purpose of this model is to explore whether finetuning improves performance on [ShaderEval](https://huggingface.co/spaces/Vipitis/ShaderEval); results to follow (sooner).
The license is carried over from the base model; however, the training data has an undefined license. Check the details in [Shadertoys](https://huggingface.co/datasets/Vipitis/Shadertoys).
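Since this is a causal language model finetuned for shader code completion, it can be loaded with the standard `transformers` text-generation API. A minimal sketch follows; the checkpoint id `Vipitis/santacoder-finetuned-shadertoys` is a placeholder (the card does not state the final repo name), and `trust_remote_code=True` is assumed because the Santacoder base model ships custom model code.

```python
# Hypothetical usage sketch -- the checkpoint id below is a placeholder,
# not a confirmed repo name from this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "Vipitis/santacoder-finetuned-shadertoys"  # placeholder id
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# Santacoder-based models require trust_remote_code=True for their custom code.
model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True)

# A typical Shadertoy entry point to complete:
prompt = "void mainImage( out vec4 fragColor, in vec2 fragCoord )\n{\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0]))
```

The `mainImage` signature is the standard Shadertoy entry point, which matches the return-completion task that ShaderEval evaluates.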