BaseModel

Model Generation

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load the model in bfloat16, letting accelerate place it on available GPUs
model = AutoModelForCausalLM.from_pretrained("AIdenU/Mistral-7B-v0.2-ko-Y24_v2.0", device_map="auto", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained("AIdenU/Mistral-7B-v0.2-ko-Y24_v2.0", use_fast=True)

prompt = [
  # System: "You are an AI assistant that follows instructions very well."
  {'role': 'system', 'content': '당신은 μ§€μ‹œλ₯Ό 맀우 잘 λ”°λ₯΄λŠ” 인곡지λŠ₯ λΉ„μ„œμž…λ‹ˆλ‹€.'},
  # User: "Does even a worm wriggle if you step on it?" (Korean proverb: even the meek react when provoked)
  {'role': 'user', 'content': '지렁이도 밟으면 κΏˆν‹€ν•˜λ‚˜μš”?'}
]
# Render the chat template to a string, tokenize it, and generate a completion
outputs = model.generate(
  **tokenizer(
    tokenizer.apply_chat_template(prompt, tokenize=False, add_generation_prompt=True),
    return_tensors='pt'
  ).to(model.device),  # keep the inputs on the same device as the model
  max_new_tokens=256,
  temperature=0.2,
  top_p=1,
  do_sample=True
)
print(tokenizer.decode(outputs[0]))
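Note that generate returns the full sequence, so the decoded string above includes the prompt as well as the reply. A minimal sketch of printing only the newly generated tokens (the inputs and reply names are illustrative, not part of the model card):

inputs = tokenizer(
  tokenizer.apply_chat_template(prompt, tokenize=False, add_generation_prompt=True),
  return_tensors='pt'
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, temperature=0.2, top_p=1, do_sample=True)
# Slice off the prompt tokens so only the model's reply is decoded
reply = tokenizer.decode(outputs[0][inputs['input_ids'].shape[1]:], skip_special_tokens=True)
print(reply)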
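For interactive use, transformers' TextStreamer can print tokens as they are sampled instead of waiting for the full completion; a small sketch reusing the inputs from the snippet above:

from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
# Tokens are written to stdout as generation proceeds
model.generate(**inputs, streamer=streamer, max_new_tokens=256, temperature=0.2, top_p=1, do_sample=True)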
Model size: 7.24B parameters (Safetensors, BF16)
