Model Description

An uncensored reasoning model based on EXAONE 3.5, trained on reasoning data. Now trained for a full epoch!

It was trained with improved training code and delivers improved performance. Here is the inference code you should use:

# DEBUGGING IN PROGRESS, check later
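Until the repo's own snippet is published, the following is a minimal sketch of standard transformers chat-style inference with `trust_remote_code` (required because the repo contains custom model code). The prompt, dtype, and generation settings are assumptions, not the repo's official values:

```python
# Minimal inference sketch (assumed settings, not the repo's official snippet).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lunahr/thea-pro-2b-100r"

# trust_remote_code is needed because the repo ships custom EXAONE model code.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # weights are stored in BF16
    trust_remote_code=True,
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Explain step by step why the sky is blue."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
)
# Print only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```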

This EXAONE 3.5 model was trained faster than with Unsloth, using custom training code.

Visit https://www.kaggle.com/code/piotr25691/distributed-hf-training-with-2xt4 to find out how you can finetune your models using BOTH of the Kaggle-provided GPUs.
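The linked notebook has the full details. As a rough illustration of the same idea (not the notebook's actual code), both T4s can be used from a single notebook with Hugging Face accelerate's `notebook_launcher`; the base model id, dataset file, and hyperparameters below are placeholders:

```python
# Sketch of data-parallel finetuning on both Kaggle T4s from a notebook.
# The base model, dataset file, and hyperparameters are assumptions.
from accelerate import notebook_launcher


def train():
    import torch
    from datasets import load_dataset
    from transformers import (
        AutoModelForCausalLM,
        AutoTokenizer,
        DataCollatorForLanguageModeling,
        Trainer,
        TrainingArguments,
    )

    model_id = "LGAI-EXAONE/EXAONE-3.5-2.4B-Instruct"  # assumed base model
    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

    # Placeholder dataset: one training example per line of plain text.
    dataset = load_dataset("text", data_files={"train": "train.txt"})["train"]
    dataset = dataset.map(
        lambda batch: tokenizer(batch["text"], truncation=True, max_length=1024),
        batched=True,
        remove_columns=["text"],
    )

    trainer = Trainer(
        model=model,
        args=TrainingArguments(
            output_dir="thea-finetune",
            per_device_train_batch_size=1,
            gradient_accumulation_steps=8,
            num_train_epochs=1,
            fp16=True,            # T4s support fp16 mixed precision, not bf16
            logging_steps=10,
            report_to="none",
        ),
        train_dataset=dataset,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()


# One process per GPU: data-parallel training across the two T4s.
notebook_launcher(train, num_processes=2)
```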
