---
base_model:
  - TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
datasets:
  - HuggingFaceH4/ultrachat_200k
library_name: transformers
tags:
  - ultrachat
---

# Model Card for TinyLlama-1.1B-intermediate-1431k-3T-adapters-ultrachat

These are LoRA adapters trained on the UltraChat 200k dataset for the TinyLlama-1.1B Intermediate Step 1431k 3T model, with the base model quantized to 4-bit during training.

```python
adapter_name = 'iqbalamo93/TinyLlama-1.1B-intermediate-1431k-3T-adapters-ultrachat'
```

## Model Details

The base model was quantized using bitsandbytes:

```python
from transformers import BitsAndBytesConfig  # BitsAndBytesConfig lives in transformers, not bitsandbytes

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                  # Load the model in 4-bit precision
    bnb_4bit_quant_type="nf4",          # NF4 quantization type
    bnb_4bit_compute_dtype="float16",   # Compute data type
    bnb_4bit_use_double_quant=True      # Apply nested (double) quantization
)
```
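For reference, this is how such a config is typically passed when loading the base model. The call below is a sketch of the standard transformers API, not a step required to use the adapters:

```python
from transformers import AutoModelForCausalLM

# Load the base model with the 4-bit config above (standard transformers usage)
base_model = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T",
    quantization_config=bnb_config,
    device_map="auto",
)
```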

## Model Description

The adapters were fine-tuned on HuggingFaceH4/ultrachat_200k on top of the 4-bit NF4-quantized base model described above.

## How to use

### Method 1: Direct loading via `AutoPeftModelForCausalLM`

```python
from peft import AutoPeftModelForCausalLM
from transformers import pipeline, AutoTokenizer

# The chat tokenizer provides the <|user|>/<|assistant|> template used below
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

adapter_name = 'iqbalamo93/TinyLlama-1.1B-intermediate-1431k-3T-adapters-ultrachat'

# AutoPeftModelForCausalLM resolves the base model and attaches the adapters
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_name,
    device_map="auto"
)
# Fold the adapter weights into the base model for plain-transformers inference
model = model.merge_and_unload()

prompt = """<|user|>
Tell me something about Large Language Models.</s>
<|assistant|>
"""

pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer)
print(pipe(prompt)[0]["generated_text"])
```
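`merge_and_unload()` folds the LoRA weights into the base model, so the result behaves like a regular transformers model and can be used directly in a `pipeline`.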

### Method 2: Direct loading via `AutoModelForCausalLM`

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

adapter_name = 'iqbalamo93/TinyLlama-1.1B-intermediate-1431k-3T-adapters-ultrachat'
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

# transformers' PEFT integration loads the base model and attaches the adapters
model = AutoModelForCausalLM.from_pretrained(
    adapter_name,
    device_map="auto"
)

prompt = """<|user|>
Tell me something about Large Language Models.</s>
<|assistant|>
"""

pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer)
print(pipe(prompt)[0]["generated_text"])
```
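This works because recent transformers releases integrate with PEFT: when a repository contains an `adapter_config.json`, `AutoModelForCausalLM` resolves the base model and injects the adapters automatically (the `peft` package must be installed).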

### Method 3: Merging with the base model explicitly

This section is still to be written; in the meantime, below is a minimal sketch of the explicit workflow, assuming the standard PEFT API (load the base model, attach the adapters with `PeftModel.from_pretrained`, then merge):
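```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base_name = "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"
adapter_name = "iqbalamo93/TinyLlama-1.1B-intermediate-1431k-3T-adapters-ultrachat"

# Load the base model in half precision (merging into 4-bit weights is lossy,
# so fp16 is the usual choice when the goal is a merged checkpoint)
base_model = AutoModelForCausalLM.from_pretrained(
    base_name,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Attach the trained adapters, then fold them into the base weights
model = PeftModel.from_pretrained(base_model, adapter_name)
model = model.merge_and_unload()

# Optionally save the merged model as a standalone checkpoint
model.save_pretrained("tinyllama-ultrachat-merged")
```

The output directory name above is illustrative; any local path works.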