Uploaded model

  • Developed by: qcube
  • License: apache-2.0
  • Finetuned from model: llm-jp/llm-jp-3-13b

This Llama-architecture model was trained 2x faster with Unsloth and Hugging Face's TRL library.

Sample use

The following code generates answers for elyza-tasks-100-TV_0.jsonl.
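A note on the environment (not stated in the original card): 4-bit loading requires bitsandbytes and a CUDA GPU, and device_map="auto" requires accelerate, neither of which is imported explicitly below. A minimal preflight check, as a sketch:

# Preflight check (assumed setup, not part of the original script):
# bitsandbytes 4-bit loading needs a CUDA-capable GPU.
import torch
assert torch.cuda.is_available(), "A CUDA GPU is required for 4-bit (bitsandbytes) loading."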

from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
)
import torch
from tqdm import tqdm
import json

HF_TOKEN = "your-token"  # your Hugging Face access token
model_name = "qcube/llm-jp-3-13b-finetune2"

# QLoRA config: 4-bit NF4 quantization for memory-efficient inference
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # load weights in 4-bit precision
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bfloat16
    bnb_4bit_use_double_quant=False,
)

# Load model
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
    token=HF_TOKEN,
)

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(
    model_name,
    trust_remote_code=True,
    token=HF_TOKEN,
)

# Load the dataset.
# In the omnicampus development environment, drag and drop the task jsonl into the left pane before running.
# A record may span multiple lines, so lines are accumulated until a closing brace is reached.
datasets = []
with open("./elyza-tasks-100-TV_0.jsonl", "r") as f:
    item = ""
    for line in f:
        line = line.strip()
        item += line
        if item.endswith("}"):
            datasets.append(json.loads(item))
            item = ""

# Run inference on each task
results = []
for data in tqdm(datasets):

    input = data["input"]

    prompt = f"""### 指示
    {input}
    ### 回答:
    """

    tokenized_input = tokenizer.encode(
        prompt, add_special_tokens=False, return_tensors="pt"
    ).to(model.device)
    with torch.no_grad():
        # Greedy decoding with a light repetition penalty
        outputs = model.generate(
            tokenized_input, max_new_tokens=100, do_sample=False, repetition_penalty=1.2
        )[0]
    # Decode only the newly generated tokens (everything after the prompt).
    output = tokenizer.decode(
        outputs[tokenized_input.size(1):], skip_special_tokens=True
    )

    results.append({"task_id": data["task_id"], "input": input, "output": output})


import re

# Strip the repository namespace so the output file name is just the model name.
model_name = re.sub(".*/", "", model_name)
with open(f"./{model_name}-outputs.jsonl", "w", encoding="utf-8") as f:
    for result in results:
        json.dump(
            result, f, ensure_ascii=False
        )  # ensure_ascii=False for handling non-ASCII characters
        f.write("\n")
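
As a quick sanity check (this step is not part of the original script), the generated file can be read back to confirm that each task produced one record with the expected keys:

# Optional: re-read the output file and verify the record count and keys.
with open(f"./{model_name}-outputs.jsonl", "r", encoding="utf-8") as f:
    records = [json.loads(line) for line in f]
print(len(records), "records; keys:", sorted(records[0].keys()))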