<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
base_model: fxmarty/tiny-llama-fast-tokenizer
batch_size: 32
bf16: true
chat_template: tokenizer_default_fallback_alpaca
datasets:
- data_files:
  - fc6136aac03f618a_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/fc6136aac03f618a_train_data.json
  type:
    field_instruction: text
    field_output: title
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
eval_steps: 20
flash_attention: true
gpu_memory_limit: 80GiB
gradient_checkpointing: true
group_by_length: true
hub_model_id: willtensora/b1c9c4ec-ffa2-429d-9c5b-90b5979c502d
hub_strategy: checkpoint
learning_rate: 0.0002
logging_steps: 10
lr_scheduler: cosine
max_steps: 2500
micro_batch_size: 4
model_type: AutoModelForCausalLM
optimizer: adamw_bnb_8bit
output_dir: /workspace/axolotl/configs
pad_to_sequence_len: true
resize_token_embeddings_to_32x: false
sample_packing: false
save_steps: 40
save_total_limit: 1
sequence_len: 2048
special_tokens:
  pad_token: </s>
tokenizer_type: LlamaTokenizerFast
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.1
wandb_entity: ''
wandb_mode: online
wandb_name: fxmarty/tiny-llama-fast-tokenizer-/workspace/input_data/fc6136aac03f618a_train_data.json
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: default
warmup_ratio: 0.05
xformers_attention: true
```

</details>
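The `datasets.type` block above maps each JSON record's `text` field to the instruction and its `title` field to the target, with `{instruction}` as the entire prompt template. As a rough illustration only (not axolotl's internal code), here is a minimal sketch of how one record would be rendered under this mapping; the local path and the assumption that the file is a JSON list of objects with `text` and `title` keys are assumptions:

```python
import json

# Hypothetical local copy of the training file referenced in the config above;
# adjust the path to wherever the file actually lives.
DATA_PATH = "fc6136aac03f618a_train_data.json"

def render_example(record: dict) -> dict:
    """Apply the config's custom format: the prompt template is '{instruction}',
    with the instruction taken from the record's `text` field and the target
    (completion) from its `title` field."""
    prompt = "{instruction}".format(instruction=record["text"])
    return {"prompt": prompt, "completion": record["title"]}

with open(DATA_PATH) as f:
    records = json.load(f)  # assumed: a JSON list of {"text": ..., "title": ...} objects

print(render_example(records[0]))
```

Since the config sets `train_on_inputs: false`, the loss is computed only on the target (`title`) tokens, not on the prompt.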
# b1c9c4ec-ffa2-429d-9c5b-90b5979c502d

This model is a fine-tuned version of [fxmarty/tiny-llama-fast-tokenizer](https://huggingface.co/fxmarty/tiny-llama-fast-tokenizer) on the fc6136aac03f618a_train_data.json dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
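The checkpoint is pushed to the Hub as `willtensora/b1c9c4ec-ffa2-429d-9c5b-90b5979c502d` (the `hub_model_id` in the config). A minimal loading sketch with `transformers`, assuming the pushed checkpoint is complete; the example prompt is hypothetical, and since the base model is a tiny test model, generations are not expected to be meaningful:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "willtensora/b1c9c4ec-ffa2-429d-9c5b-90b5979c502d"

# trust_remote_code mirrors the training config; the tokenizer is LlamaTokenizerFast.
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)

# Hypothetical prompt: the dataset maps a `text` field to the prompt and a
# `title` field to the target, so the model was tuned toward title-style outputs.
inputs = tokenizer("Some article text to turn into a title", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```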
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- training_steps: 18
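For reference, the sketch below shows a rough `transformers.TrainingArguments` equivalent of the settings above. The actual run was driven by axolotl, which builds its arguments internally, so treat this as an approximation; `output_dir` is a placeholder.

```python
from transformers import TrainingArguments

# Approximate mapping of the hyperparameters listed above, not the exact setup.
args = TrainingArguments(
    output_dir="outputs",              # placeholder, not the original path
    per_device_train_batch_size=4,     # micro_batch_size
    per_device_eval_batch_size=4,
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,                 # from the axolotl config
    optim="adamw_bnb_8bit",            # 8-bit AdamW via bitsandbytes
    bf16=True,
    gradient_checkpointing=True,
    seed=42,
    eval_strategy="steps",
    eval_steps=20,
    logging_steps=10,
    save_steps=40,
    save_total_limit=1,
)
```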
### Training results
| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.0071 | 1    | 10.3739         |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1