---
base_model: lpetreadg/trained-tinyllama-ultrachat
inference: false
license: apache-2.0
model-index:
  - name: trained-tinyllama-ultrachat
    results: []
model_creator: lpetreadg
model_name: trained-tinyllama-ultrachat
pipeline_tag: text-generation
quantized_by: afrideva
tags:
  - generated_from_trainer
  - gguf
  - ggml
  - quantized
  - q2_k
  - q3_k_m
  - q4_k_m
  - q5_k_m
  - q6_k
  - q8_0
---

# lpetreadg/trained-tinyllama-ultrachat-GGUF

Quantized GGUF model files for [trained-tinyllama-ultrachat](https://huggingface.co/lpetreadg/trained-tinyllama-ultrachat) from lpetreadg.
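
A minimal usage sketch, assuming the files follow afrideva's usual layout: the `repo_id` and the exact `.gguf` filename below are guesses, not confirmed by this card, so check them against the repository's file list before running.

```python
# Sketch: download one quant and generate text with llama-cpp-python.
# ASSUMPTIONS: repo_id and filename are unverified; confirm both in the
# repository's "Files and versions" tab.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="afrideva/trained-tinyllama-ultrachat-GGUF",  # assumed repo id
    filename="trained-tinyllama-ultrachat.q4_k_m.gguf",   # assumed filename
)

llm = Llama(model_path=model_path, n_ctx=2048)
result = llm("Question: What is quantization?\nAnswer:", max_tokens=128)
print(result["choices"][0]["text"])
```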

## Original Model Card:

# trained-tinyllama-ultrachat

This model is a fine-tuned version of [PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T](https://huggingface.co/PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 1.3258

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):

- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1
- num_epochs: 1
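
A hedged reconstruction of these settings as `transformers.TrainingArguments`; this is inferred from the list above, not the author's actual training script, and `output_dir` is a placeholder:

```python
# Reconstruction of the listed hyperparameters; not the original script.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="trained-tinyllama-ultrachat",  # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    gradient_accumulation_steps=2,  # 64 * 2 = 128 total train batch size
    lr_scheduler_type="cosine",
    warmup_steps=1,
    num_train_epochs=1,
    # Adam betas (0.9, 0.999) and epsilon 1e-08 match the transformers
    # defaults (adam_beta1, adam_beta2, adam_epsilon), so no override needed.
)
```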

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3767        | 0.08  | 100  | 1.3685          |
| 1.3494        | 0.17  | 200  | 1.3490          |
| 1.3436        | 0.25  | 300  | 1.3389          |
| 1.3231        | 0.33  | 400  | 1.3331          |
| 1.3278        | 0.42  | 500  | 1.3296          |
| 1.3214        | 0.5   | 600  | 1.3276          |
| 1.3376        | 0.58  | 700  | 1.3266          |
| 1.3227        | 0.67  | 800  | 1.3261          |
| 1.3329        | 0.75  | 900  | 1.3259          |
| 1.3185        | 0.83  | 1000 | 1.3258          |
| 1.332         | 0.92  | 1100 | 1.3258          |

### Framework versions

- Transformers 4.34.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1