---
license: apache-2.0
datasets:
- kobprof/skolegpt-instruct
language:
- da
---

Munin-7b-alpha from [Danish Foundation Models](https://www.foundationmodels.dk/), fine-tuned for one epoch on [kobprof/skolegpt-instruct](https://huggingface.co/datasets/kobprof/skolegpt-instruct) using the code from [this notebook](https://github.com/alexandrainst/d3a-llm-workshop) by the Alexandra Institute.