Training GPU: H100

Test

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "SkyOrbis/SKY-Ko-Qwen2.5-3B-Instruct-SFT"

# Load the model and tokenizer; dtype and device placement are chosen automatically
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "์„œ์šธ์˜ ์ˆ˜๋„๋Š”?"  # "What is the capital of Seoul?"
messages = [
    {"role": "system", "content": "์ฃผ์–ด์ง„ ์งˆ๋ฌธ์— ๋Œ€๋‹ต์„ ํ•˜์„ธ์š”."},  # "Answer the given question."
    {"role": "user", "content": prompt}
]
# Render the chat messages into the model's expected prompt format
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated tokens remain
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
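For a quicker smoke test, the same checkpoint can also be driven through the `transformers` text-generation pipeline, which applies the chat template automatically when given chat-style messages. A minimal sketch, assuming a recent `transformers` version that accepts chat inputs; the generation settings below are illustrative assumptions, not values from this card:

```python
from transformers import pipeline

# Sketch only: model id taken from this card, other settings are assumptions
generator = pipeline(
    "text-generation",
    model="SkyOrbis/SKY-Ko-Qwen2.5-3B-Instruct-SFT",
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "system", "content": "์ฃผ์–ด์ง„ ์งˆ๋ฌธ์— ๋Œ€๋‹ต์„ ํ•˜์„ธ์š”."},  # "Answer the given question."
    {"role": "user", "content": "์„œ์šธ์˜ ์ˆ˜๋„๋Š”?"},  # "What is the capital of Seoul?"
]
out = generator(messages, max_new_tokens=512)
# The pipeline returns the full conversation; the last message is the assistant reply
print(out[0]["generated_text"][-1]["content"])
```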
Model size: 3.09B params · Tensor type: F32 (Safetensors)

Model tree for SkyOrbis/SKY-Ko-Qwen2.5-3B-Instruct-SFT: this model is fine-tuned from the base model Qwen/Qwen2.5-3B.