Built with Axolotl

See the axolotl config used for this training run:

axolotl version: 0.4.1

base_model: fxmarty/tiny-random-GemmaForCausalLM
batch_size: 32
bf16: true
chat_template: tokenizer_default_fallback_alpaca
datasets:
- data_files:
  - b7c2a4a781c93416_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/b7c2a4a781c93416_train_data.json
  type:
    field_input: context
    field_instruction: question
    field_output: answer
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
eval_steps: 20
flash_attention: true
gpu_memory_limit: 80GiB
gradient_checkpointing: true
group_by_length: true
hub_model_id: willtensora/fd1980a0-7e71-4e52-addb-318dca5991d5
hub_strategy: checkpoint
learning_rate: 0.0002
logging_steps: 10
lr_scheduler: cosine
max_steps: 2500
micro_batch_size: 4
model_type: AutoModelForCausalLM
optimizer: adamw_bnb_8bit
output_dir: /workspace/axolotl/configs
pad_to_sequence_len: true
resize_token_embeddings_to_32x: false
sample_packing: false
save_steps: 40
save_total_limit: 1
sequence_len: 2048
tokenizer_type: GemmaTokenizerFast
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.1
wandb_entity: ''
wandb_mode: online
wandb_name: fxmarty/tiny-random-GemmaForCausalLM-/workspace/input_data/b7c2a4a781c93416_train_data.json
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: default
warmup_ratio: 0.05
xformers_attention: true
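
As a rough illustration of the custom dataset `type` block above, the mapping from one JSON record to a prompt works roughly as in the sketch below. This is not Axolotl's internal code; the field names (question, context, answer) and format strings come from the config, and the example record is hypothetical.

```python
# Minimal sketch of the custom prompt format declared in the config above.
def build_prompt(record: dict) -> tuple[str, str]:
    instruction = record["question"]   # field_instruction: question
    inp = record.get("context", "")    # field_input: context
    output = record["answer"]          # field_output: answer

    # system_prompt is empty and system_format is '{system}', so no system text is prepended.
    if inp:
        prompt = f"{instruction} {inp}"   # format: '{instruction} {input}'
    else:
        prompt = instruction              # no_input_format: '{instruction}'
    return prompt, output

example = {"question": "What colour is the sky?", "context": "On a clear day.", "answer": "Blue."}
print(build_prompt(example))
# ('What colour is the sky? On a clear day.', 'Blue.')
```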

fd1980a0-7e71-4e52-addb-318dca5991d5

This model is a fine-tuned version of fxmarty/tiny-random-GemmaForCausalLM on the local JSON dataset described in the config above. It achieves the following results on the evaluation set:

  • Loss: 11.7971
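
A minimal way to load the published checkpoint with Transformers is sketched below. This snippet is illustrative and not part of the original card; the repository id is taken from hub_model_id in the config, and it assumes the repo is publicly accessible.

```python
# Illustrative loading sketch; repo id taken from hub_model_id in the config above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "willtensora/fd1980a0-7e71-4e52-addb-318dca5991d5"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,   # training used bf16
    trust_remote_code=True,       # mirrors trust_remote_code: true in the config
)

inputs = tokenizer("What colour is the sky? On a clear day.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that the base model is a tiny randomly initialized Gemma intended for testing, so generations are not expected to be meaningful.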

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0002
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 8
  • total_train_batch_size: 32
  • total_eval_batch_size: 32
  • optimizer: 8-bit AdamW (bitsandbytes, adamw_bnb_8bit) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 7 (≈ warmup_ratio 0.05 × 156 training steps; see the sketch after this list)
  • training_steps: 156
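
The derived values above follow directly from the config. The following is a back-of-the-envelope sketch of that arithmetic and of a warmup-plus-cosine schedule; it approximates the Transformers cosine scheduler rather than reproducing its exact implementation.

```python
# Pure-Python check of the derived hyperparameters; not the trainer's actual code.
import math

micro_batch_size = 4      # per-device batch size from the config
num_devices = 8
total_train_batch_size = micro_batch_size * num_devices    # 4 * 8 = 32

training_steps = 156
warmup_ratio = 0.05
warmup_steps = int(training_steps * warmup_ratio)           # int(7.8) = 7

def cosine_lr(step: int, base_lr: float = 2e-4) -> float:
    """Cosine decay with linear warmup, matching lr_scheduler: cosine."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, training_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

print(total_train_batch_size, warmup_steps)          # 32 7
print(cosine_lr(0), cosine_lr(warmup_steps), cosine_lr(training_steps))
```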

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|---------------|--------|------|-----------------|
| No log        | 0.0008 | 1    | 12.4537         |
| 12.4357       | 0.0161 | 20   | 12.4267         |
| 12.392        | 0.0322 | 40   | 12.3762         |
| 12.3026       | 0.0483 | 60   | 12.2651         |
| 12.1177       | 0.0645 | 80   | 12.0658         |
| 11.9286       | 0.0806 | 100  | 11.8860         |
| 11.8324       | 0.0967 | 120  | 11.8100         |
| 11.798        | 0.1128 | 140  | 11.7971         |

Framework versions

  • Transformers 4.46.0
  • Pytorch 2.5.0+cu124
  • Datasets 3.0.1
  • Tokenizers 0.20.1