---
language:
- en
- ko
license: other
license_name: exaone
license_link: LICENSE
tags:
- text-generation-inference
- transformers
- trl
- sft
- reasoning
- lg-ai
- exaone
- exaone-3.5
- o1
base_model: LGAI-EXAONE/EXAONE-3.5-2.4B-Instruct
datasets:
- KingNish/reasoning-base-20k
---

# Model Description

An uncensored reasoning EXAONE 3.5 model trained on reasoning data. Now with a full epoch!

It has been trained using improved training code and delivers improved performance.

Here is the inference code you should use:

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

MAX_REASONING_TOKENS = 1024
MAX_RESPONSE_TOKENS = 512

model_name = "lunahr/thea-pro-2b-100r"

model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Which is greater, 9.9 or 9.11?"
messages = [
    {"role": "user", "content": prompt}
]

# Generate reasoning
input_ids = tokenizer.apply_chat_template(messages, add_reasoning_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(
    input_ids,
    eos_token_id=tokenizer.eos_token_id,
    max_new_tokens=MAX_REASONING_TOKENS,
    do_sample=False,
)
# Decode only the newly generated tokens, not the echoed prompt
reasoning_output = tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
print("REASONING: " + reasoning_output)

# Generate answer, feeding the reasoning back in as a "reasoning" turn
messages.append({"role": "reasoning", "content": reasoning_output})
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(
    input_ids,
    eos_token_id=tokenizer.eos_token_id,
    max_new_tokens=MAX_RESPONSE_TOKENS,
    do_sample=False,
)
response_output = tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
print("ANSWER: " + response_output)
```

- **Trained by:** [Piotr Zalewski](https://huggingface.co/lunahr)
- **License:** exaone
- **Finetuned from model:** [LGAI-EXAONE/EXAONE-3.5-2.4B-Instruct](https://huggingface.co/LGAI-EXAONE/EXAONE-3.5-2.4B-Instruct)
- **Dataset used:** [KingNish/reasoning-base-20k](https://huggingface.co/datasets/KingNish/reasoning-base-20k)

This EXAONE model was trained faster than [Unsloth](https://github.com/unslothai/unsloth) allows, using [custom training code](https://www.kaggle.com/code/piotr25691/distributed-hf-training-with-2xt4). Visit that notebook to find out how you can finetune your own models using BOTH of the Kaggle-provided GPUs.
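
As a rough illustration of what two-GPU data-parallel finetuning looks like, here is a minimal sketch; it is not the code from the linked notebook, and the script name `train.py`, the dataset column name, and all hyperparameters below are assumptions:

```py
# train.py - hypothetical sketch of data-parallel finetuning on Kaggle's 2x T4.
# Launch across both GPUs with: torchrun --nproc_per_node=2 train.py
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "LGAI-EXAONE/EXAONE-3.5-2.4B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)

dataset = load_dataset("KingNish/reasoning-base-20k", split="train")

def tokenize(batch):
    # The column name "user" is an assumption; check the dataset card for the real schema.
    return tokenizer(batch["user"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

args = TrainingArguments(
    output_dir="thea-finetune",        # illustrative name
    per_device_train_batch_size=1,     # each T4 has only 16 GB of VRAM
    gradient_accumulation_steps=8,
    gradient_checkpointing=True,       # trade compute for memory
    optim="adafactor",                 # lighter optimizer state than AdamW
    num_train_epochs=1,                # "a full epoch", per the card above
    fp16=True,                         # T4 does not support bf16
    logging_steps=50,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

When launched with `torchrun --nproc_per_node=2`, `Trainer` wraps the model in DistributedDataParallel automatically, so each T4 processes its own shard of every batch; see the linked notebook for the author's actual setup.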