Model parameters: d_model 1792 ffw_size 7168 kv_size 128 n_heads 14 n_layers 26 Megatron-DeepSpeed/pretrain_gpt.py --tensor-model-parallel-size 1 --pipeline-model-parallel-size 1 --num-layers 26 --hidden-size 1792 --num-attention-heads 14 --kv-channels 128 --ffn-hidden-size 7168 --seq-length 2048 --max-position-embeddings 2048 --micro-batch-size 16 --global-batch-size 256 --train-samples 48_828 --vocab-file gpt2/vocab.json --merge-file gpt2/merges.txt --loss-scale 12 --clip-grad 1.0 --kill-switch-path kill-switch-1b1100m100m --bf16 --checkpoint-activations --optimizer adam --adam-beta1 0.9 --adam-beta2 0.999 --adam-eps 1e-8 --lr 2e-4 --min-lr 2e-5 --lr-decay-style cosine --lr-decay-samples 48_828 --lr-warmup-samples 488 --clip-grad 1.0 --weight-decay 1e-1 --log-interval 10 --save-interval 10000 --eval-interval 1000 --eval-iters 1 --tensorboard-dir tensorboard_1b1100m100m --tensorboard-queue-size 5 --log-timers-to-tensorboard --log-batch-size-to-tensorboard --log-validation-ppl-to-tensorboard --save checkpoints_1b1100m100m --load checkpoints_1b1100m100m --train-weighted-split-paths-path train100m.txt --valid-weighted-split-paths-path val.txt --data-impl mmap --deepspeed --deepspeed_config ds_configs/3322154.json --zero-stage 0 START 3322154: Thu 16 Mar 2023 03:24:33 PM EET 0: 0: 0: ======================= ROCm System Management Interface ======================= 0: ================================= Concise Info ================================= 0: GPU Temp AvgPwr SCLK MCLK Fan Perf PwrCap VRAM% GPU% 0: 0 47.0c 85.0W 800Mhz 1600Mhz 0% auto 560.0W 0% 0% 0: 1 49.0c N/A 800Mhz 1600Mhz 0% auto 0.0W 0% 0% 0: 2 40.0c 88.0W 800Mhz 1600Mhz 0% auto 560.0W 0% 0% 0: 3 43.0c N/A 800Mhz 1600Mhz 0% auto 0.0W 0% 0% 0: 4 44.0c 86.0W 800Mhz 1600Mhz 0% auto 560.0W 0% 0% 0: 5 46.0c N/A 800Mhz 1600Mhz 0% auto 0.0W 0% 0% 0: 6 39.0c 85.0W 800Mhz 1600Mhz 0% auto 560.0W 0% 0% 0: 7 47.0c N/A 800Mhz 1600Mhz 0% auto 0.0W 0% 0% 0: ================================================================================ 0: ============================= End of ROCm SMI Log ============================== 1: 1: 1: ======================= ROCm System Management Interface ======================= 1: ================================= Concise Info ================================= 1: GPU Temp AvgPwr SCLK MCLK Fan Perf PwrCap VRAM% GPU% 1: 0 46.0c 87.0W 800Mhz 1600Mhz 0% auto 560.0W 0% 0% 1: 1 45.0c N/A 800Mhz 1600Mhz 0% auto 0.0W 0% 0% 1: 2 33.0c 91.0W 800Mhz 1600Mhz 0% auto 560.0W 0% 0% 1: 3 41.0c N/A 800Mhz 1600Mhz 0% auto 0.0W 0% 0% 1: 4 45.0c 83.0W 800Mhz 1600Mhz 0% auto 560.0W 0% 0% 1: 5 46.0c N/A 800Mhz 1600Mhz 0% auto 0.0W 0% 0% 1: 6 41.0c 82.0W 800Mhz 1600Mhz 0% auto 560.0W 0% 0% 1: 7 44.0c N/A 800Mhz 1600Mhz 0% auto 0.0W 0% 0% 1: ================================================================================ 1: ============================= End of ROCm SMI Log ============================== 0: Launching on nid005095 (0/2), master nid005095 port 9999, GPUs 8, CUDA: True 1: Launching on nid005096 (1/2), master nid005095 port 9999, GPUs 8, CUDA: True 0: using world size: 16, data-parallel-size: 16, tensor-model-parallel size: 1, pipeline-model-parallel size: 1 0: accumulate and all-reduce gradients in fp32 for bfloat16 data type. 0: using torch.bfloat16 for parameters ... 0: ------------------------ arguments ------------------------ 0: abort_on_unmet_fused_kernel_constraints ......... False 0: accumulate_allreduce_grads_in_fp32 .............. True 0: adam_beta1 ...................................... 
0.9 0: adam_beta2 ...................................... 0.999 0: adam_eps ........................................ 1e-08 0: adlr_autoresume ................................. False 0: adlr_autoresume_interval ........................ 1000 0: apply_query_key_layer_scaling ................... True 0: apply_residual_connection_post_layernorm ........ False 0: attention_dropout ............................... 0.1 0: attention_softmax_in_fp32 ....................... False 0: bert_binary_head ................................ True 0: bert_load ....................................... None 0: bf16 ............................................ True 0: bias_dropout_fusion ............................. True 0: bias_gelu_fusion ................................ True 0: biencoder_projection_dim ........................ 0 0: biencoder_shared_query_context_model ............ False 0: block_data_path ................................. None 0: checkpoint_activations .......................... True 0: checkpoint_in_cpu ............................... False 0: checkpoint_num_layers ........................... 1 0: clip_grad ....................................... 1.0 0: codecarbon_dir .................................. None 0: consumed_train_samples .......................... 0 0: consumed_train_tokens ........................... 0 0: consumed_valid_samples .......................... 0 0: contigious_checkpointing ........................ False 0: cpu_optimizer ................................... False 0: cpu_torch_adam .................................. False 0: curriculum_learning ............................. False 0: data_impl ....................................... mmap 0: data_parallel_size .............................. 16 0: data_path ....................................... None 0: dataloader_type ................................. single 0: DDP_impl ........................................ local 0: decoder_seq_length .............................. None 0: deepscale ....................................... False 0: deepscale_config ................................ None 0: deepspeed ....................................... True 0: deepspeed_activation_checkpointing .............. False 0: deepspeed_config ................................ ds_configs/3322154.json 0: deepspeed_mpi ................................... False 0: distribute_checkpointed_activations ............. False 0: distributed_backend ............................. nccl 0: embed_layernorm ................................. False 0: embedding_path .................................. None 0: encoder_seq_length .............................. 2048 0: eod_mask_loss ................................... False 0: eval_interval ................................... 1000 0: eval_iters ...................................... 1 0: eval_only ....................................... None 0: evidence_data_path .............................. None 0: exit_duration_in_mins ........................... None 0: exit_interval ................................... None 0: ffn_hidden_size ................................. 7168 0: finetune ........................................ False 0: fp16 ............................................ False 0: fp16_lm_cross_entropy ........................... False 0: fp32_residual_connection ........................ False 0: gigaflos_no_embeds .............................. 0 0: global_batch_size ............................... 256 0: glu_activation .................................. None 0: hidden_dropout .................................. 
0.1 0: hidden_size ..................................... 1792 0: hysteresis ...................................... 2 0: ict_head_size ................................... None 0: ict_load ........................................ None 0: img_dim ......................................... 224 0: indexer_batch_size .............................. 128 0: indexer_log_interval ............................ 1000 0: inference ....................................... False 0: init_method_std ................................. 0.02 0: init_method_xavier_uniform ...................... False 0: initial_loss_scale .............................. 4294967296 0: kill_switch_path ................................ kill-switch-1b1100m100m 0: kv_channels ..................................... 128 0: layer_norm_fusion ............................... True 0: layernorm_epsilon ............................... 1e-05 0: lazy_mpu_init ................................... None 0: load ............................................ checkpoints_1b1100m100m 0: local_rank ...................................... None 0: log_batch_size_to_tensorboard ................... True 0: log_interval .................................... 10 0: log_learning_rate_to_tensorboard ................ True 0: log_level ....................................... None 0: log_level_replica ............................... None 0: log_loss_scale_to_tensorboard ................... True 0: log_num_zeros_in_grad ........................... False 0: log_params_norm ................................. False 0: log_path ........................................ None 0: log_timers_to_tensorboard ....................... True 0: log_validation_ppl_to_tensorboard ............... True 0: loss_on_targets_only ............................ False 0: loss_scale ...................................... 12.0 0: loss_scale_window ............................... 1000 0: lr .............................................. 0.0002 0: lr_decay_iters .................................. None 0: lr_decay_samples ................................ 48828 0: lr_decay_style .................................. cosine 0: lr_decay_tokens ................................. None 0: lr_warmup_fraction .............................. None 0: lr_warmup_iters ................................. 0 0: lr_warmup_samples ............................... 488 0: make_vocab_size_divisible_by .................... 128 0: mask_prob ....................................... 0.15 0: masked_softmax_fusion ........................... True 0: max_position_embeddings ......................... 2048 0: mean_noise_span_length .......................... None 0: memory_centric_tiled_linear ..................... False 0: merge_file ...................................... gpt2/merges.txt 0: micro_batch_size ................................ 16 0: min_loss_scale .................................. 1.0 0: min_lr .......................................... 2e-05 0: mmap_warmup ..................................... False 0: no_load_optim ................................... None 0: no_load_rng ..................................... None 0: no_save_optim ................................... None 0: no_save_rng ..................................... None 0: noise_density ................................... None 0: num_attention_heads ............................. 14 0: num_channels .................................... 3 0: num_classes ..................................... 1000 0: num_layers ...................................... 
26 0: num_layers_per_virtual_pipeline_stage ........... None 0: num_workers ..................................... 2 0: onnx_safe ....................................... None 0: openai_gelu ..................................... False 0: optimizer ....................................... adam 0: optimizer_fusion ................................ True 0: override_lr_scheduler ........................... False 0: pad_vocab_size_to ............................... None 0: params_dtype .................................... torch.bfloat16 0: partition_activations ........................... False 0: patch_dim ....................................... 16 0: pipeline_model_parallel_size .................... 1 0: position_embedding_type ......................... PositionEmbeddingType.absolute 0: pp_partition_method ............................. None 0: profile_backward ................................ False 0: query_in_block_prob ............................. 0.1 0: rampup_batch_size ............................... None 0: rank ............................................ 0 0: remote_device ................................... none 0: reset_attention_mask ............................ False 0: reset_position_ids .............................. False 0: reset_progress .................................. None 0: retriever_report_topk_accuracies ................ [] 0: retriever_score_scaling ......................... False 0: retriever_seq_length ............................ 256 0: reweight_loss_based_on_position_frequency ....... False 0: sample_rate ..................................... 1.0 0: save ............................................ checkpoints_1b1100m100m 0: save_interval ................................... 10000 0: scatter_gather_tensors_in_pipeline .............. True 0: scattered_embeddings ............................ False 0: seed ............................................ 1234 0: seq_length ...................................... 2048 0: sgd_momentum .................................... 0.9 0: short_seq_prob .................................. 0.1 0: skip_train_iteration_range ...................... None 0: split ........................................... None 0: split_transformers .............................. False 0: sync_tp_duplicated_parameters ................... False 0: synchronize_each_layer .......................... False 0: tensor_model_parallel_size ...................... 1 0: tensorboard_dir ................................. tensorboard_1b1100m100m 0: tensorboard_log_interval ........................ 1 0: tensorboard_queue_size .......................... 5 0: test_weighted_split_paths ....................... None 0: test_weighted_split_paths_path .................. None 0: tile_factor ..................................... 1 0: titles_data_path ................................ None 0: tokenizer_name_or_path .......................... None 0: tokenizer_type .................................. GPT2BPETokenizer 0: train_iters ..................................... None 0: train_samples ................................... 48828 0: train_tokens .................................... None 0: train_weighted_split_names ...................... ['train'] 0: train_weighted_split_paths ...................... [['/scratch/project_462000119/data/c4_subsampled/gpt2tok_c4_en_100M_text_document']] 0: train_weighted_split_paths_path ................. None 0: train_weighted_split_splits ..................... [['0:1']] 0: train_weighted_split_weights .................... 
[['1.0']] 0: universal_checkpoint ............................ False 0: use_bnb_optimizer ............................... False 0: use_checkpoint_lr_scheduler ..................... False 0: use_contiguous_buffers_in_ddp ................... True 0: use_cpu_initialization .......................... None 0: use_one_sent_docs ............................... False 0: use_pin_memory .................................. False 0: valid_num_workers ............................... 2 0: valid_weighted_split_names ...................... ['validation'] 0: valid_weighted_split_paths ...................... [['/scratch/project_462000119/data/c4_validation/gpt2tok_c4validation_rerun_text_document']] 0: valid_weighted_split_paths_path ................. None 0: valid_weighted_split_splits ..................... [['0:1']] 0: valid_weighted_split_weights .................... [['1.0']] 0: virtual_pipeline_model_parallel_size ............ None 0: vocab_extra_ids ................................. 0 0: vocab_file ...................................... gpt2/vocab.json 0: weight_decay .................................... 0.1 0: world_size ...................................... 16 0: zero_allgather_bucket_size ...................... 0.0 0: zero_contigious_gradients ....................... False 0: zero_reduce_bucket_size ......................... 0.0 0: zero_reduce_scatter ............................. False 0: zero_stage ...................................... 0 0: -------------------- end of arguments --------------------- 0: setting number of micro-batches to constant 1 0: > building GPT2BPETokenizer tokenizer ... 0: > padded vocab (size: 50257) with 47 dummy tokens (new size: 50304) 0: DeepSpeed general environment info: 0: torch install path ............... ['/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/lib/python3.9/site-packages/torch'] 0: torch version .................... 1.13.0+rocm5.2 0: torch cuda version ............... None 0: torch hip version ................ 5.2.21151-afdc89f8 0: nvcc version ..................... None 0: deepspeed install path ........... ['/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/lib/python3.9/site-packages/deepspeed'] 0: deepspeed info ................... 0.7.5, unknown, unknown 0: deepspeed wheel compiled w. ...... torch 1.13, hip 5.1 1: > setting tensorboard ... 0: **** Git info for Megatron: git_hash=unknown git_branch=unknown **** 0: > initializing torch distributed ... 0: [2023-03-16 15:25:24,193] [INFO] [comm.py:633:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl 0: > initializing tensor model parallel with size 1 0: > initializing pipeline model parallel with size 1 0: > setting random seeds to 1234 ... 0: > initializing model parallel cuda seeds on global rank 0, model parallel rank 0, and data parallel rank 0 with model parallel seed: 3952 and data parallel seed: 1234 0: > compiling dataset index builder ... 0: make: Entering directory '/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/data' 0: make: Nothing to be done for 'default'. 0: make: Leaving directory '/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/data' 0: >>> done with dataset index builder. Compilation time: 0.112 seconds 0: > compiling and loading fused kernels ... 
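For reference, the ~1.10B parameter count that DeepSpeed reports further down (1096.338M total, 1.002523648B without embeddings) can be reproduced from the hyperparameters in the argument dump above. A minimal sketch, assuming the usual GPT-2-style parameterization in Megatron-DeepSpeed (tied input/output embeddings, learned absolute position embeddings, biases on every linear layer, two LayerNorms per block plus a final LayerNorm); variable names are illustrative only:

```python
# Rough parameter count for the model in this log, using the logged hyperparameters.
d_model  = 1792      # hidden_size
n_layers = 26
ffn      = 7168      # ffn_hidden_size
seq_len  = 2048      # max_position_embeddings
vocab    = 50304     # 50257 padded with 47 dummy tokens to a multiple of 128

per_layer = (
    4 * d_model * d_model + 4 * d_model   # QKV + output projection, weights and biases
    + 2 * d_model * ffn + ffn + d_model   # two FFN matmuls with biases
    + 2 * 2 * d_model                     # two LayerNorms (gain + bias each)
)
transformer = n_layers * per_layer + 2 * d_model        # plus the final LayerNorm
embeddings  = vocab * d_model + seq_len * d_model       # token + position embeddings

print(transformer)               # 1_002_523_648 -> "estimated model parameters without embeddings"
print(transformer + embeddings)  # 1_096_338_432 -> "STAGE_PARAMS=1096338432 (1096.338M)"
```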
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax.cpp -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_hip.cpp [skipped, already hipified] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_hip.h [skipped, already hipified] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h [skipped, no changes] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h [skipped, no changes] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_cuda.cu -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_hip.hip [skipped, already hipified] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h [skipped, no changes] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h [skipped, no changes] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_hip.h [skipped, already hipified] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_hip.h [skipped, already hipified] 0: Total number of unsupported CUDA function calls: 0 0: 0: 0: Total number of replaced kernel launches: 87 0: ninja: no work to do. 
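The hipify messages and the "[1/1] c++ ... -o scaled_masked_softmax_cuda.so" lines in this part of the log come from Megatron's fused kernels being JIT-built with ninja (and translated from CUDA to HIP for these AMD GPUs) through PyTorch's C++ extension loader. A minimal sketch of that mechanism, with placeholder file names rather than the exact sources and flags used by megatron/fused_kernels in this run:

```python
# Sketch of JIT-building one fused kernel the way Megatron does at startup; the real
# source list and compile flags live in megatron/fused_kernels/__init__.py.
from torch.utils import cpp_extension

scaled_masked_softmax = cpp_extension.load(
    name="scaled_masked_softmax_cuda",
    sources=[
        "megatron/fused_kernels/scaled_masked_softmax.cpp",
        "megatron/fused_kernels/scaled_masked_softmax_cuda.cu",  # hipified to .hip on ROCm
    ],
    verbose=True,  # prints the ninja/hipify build lines seen in this log
)
```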
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax.cpp -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_hip.cpp [skipped, already hipified] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_cuda.cu -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_hip.hip [skipped, already hipified] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h [skipped, no changes] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h [skipped, no changes] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_hip.h [skipped, already hipified] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_hip.h [skipped, already hipified] 0: Total number of unsupported CUDA function calls: 0 0: 0: 0: Total number of replaced kernel launches: 63 0: [1/1] c++ scaled_masked_softmax_hip.o scaled_masked_softmax_hip.cuda.o -shared -L/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/lib/python3.9/site-packages/torch/lib -lc10 -lc10_hip -ltorch_cpu -ltorch_hip -ltorch -ltorch_python -L/pfs/lustrep2/projappl/project_462000125/samantao-public/rocm/rocm-5.2.3/lib -lamdhip64 -o scaled_masked_softmax_cuda.so 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/layer_norm_cuda.cpp -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/layer_norm_cuda.cpp [skipped, no changes] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/layer_norm_cuda_kernel.cu -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/layer_norm_hip_kernel.hip [skipped, already hipified] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h [skipped, no changes] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h [skipped, no changes] 0: 
/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_hip.h [skipped, already hipified] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_hip.h [skipped, already hipified] 0: Total number of unsupported CUDA function calls: 0 0: 0: 0: Total number of replaced kernel launches: 67 0: [1/1] c++ layer_norm_hip_kernel.cuda.o layer_norm_cuda.o -shared -L/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/lib/python3.9/site-packages/torch/lib -lc10 -lc10_hip -ltorch_cpu -ltorch_hip -ltorch -ltorch_python -L/pfs/lustrep2/projappl/project_462000125/samantao-public/rocm/rocm-5.2.3/lib -lamdhip64 -o fused_mix_prec_layer_norm_cuda.so 0: >>> done with compiling and loading fused kernels. Compilation time: 24.403 seconds 0: time to initialize megatron (seconds): -34.623 0: [after megatron is initialized] datetime: 2023-03-16 15:25:49 0: building GPT model ... 0: [2023-03-16 15:25:49,507] [INFO] [utils.py:827:see_memory_usage] Before Building Model 0: [2023-03-16 15:25:49,508] [INFO] [utils.py:828:see_memory_usage] MA 0.0 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB 0: [2023-03-16 15:25:49,508] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 30.82 GB, percent = 6.1% 0: SEED_LAYERS=False BASE_SEED=1234 SEED_FN=None 0: Using topology: {ProcessCoord(pipe=0, data=0, model=0): 0, ProcessCoord(pipe=0, data=1, model=0): 1, ProcessCoord(pipe=0, data=2, model=0): 2, ProcessCoord(pipe=0, data=3, model=0): 3, ProcessCoord(pipe=0, data=4, model=0): 4, ProcessCoord(pipe=0, data=5, model=0): 5, ProcessCoord(pipe=0, data=6, model=0): 6, ProcessCoord(pipe=0, data=7, model=0): 7, ProcessCoord(pipe=0, data=8, model=0): 8, ProcessCoord(pipe=0, data=9, model=0): 9, ProcessCoord(pipe=0, data=10, model=0): 10, ProcessCoord(pipe=0, data=11, model=0): 11, ProcessCoord(pipe=0, data=12, model=0): 12, ProcessCoord(pipe=0, data=13, model=0): 13, ProcessCoord(pipe=0, data=14, model=0): 14, ProcessCoord(pipe=0, data=15, model=0): 15} 0: [2023-03-16 15:25:49,998] [INFO] [module.py:366:_partition_layers] Partitioning pipeline stages with method type:transformer 0: stage=0 layers=33 0: 0: _to_float16 0: 1: EmbeddingPipe 0: 2: 0: 3: ParallelTransformerLayerPipe 0: 4: ParallelTransformerLayerPipe 0: 5: ParallelTransformerLayerPipe 0: 6: ParallelTransformerLayerPipe 0: 7: ParallelTransformerLayerPipe 0: 8: ParallelTransformerLayerPipe 0: 9: ParallelTransformerLayerPipe 0: 10: ParallelTransformerLayerPipe 0: 11: ParallelTransformerLayerPipe 0: 12: ParallelTransformerLayerPipe 0: 13: ParallelTransformerLayerPipe 0: 14: ParallelTransformerLayerPipe 0: 15: ParallelTransformerLayerPipe 0: 16: ParallelTransformerLayerPipe 0: 17: ParallelTransformerLayerPipe 0: 18: ParallelTransformerLayerPipe 0: 19: ParallelTransformerLayerPipe 0: 20: ParallelTransformerLayerPipe 0: 21: ParallelTransformerLayerPipe 0: 22: ParallelTransformerLayerPipe 0: 23: ParallelTransformerLayerPipe 0: 24: ParallelTransformerLayerPipe 0: 25: ParallelTransformerLayerPipe 0: 26: ParallelTransformerLayerPipe 0: 27: ParallelTransformerLayerPipe 0: 28: 
ParallelTransformerLayerPipe 0: 29: undo 0: 30: MixedFusedLayerNorm 0: 31: EmbeddingPipe 0: 32: float16_to_fp32 0: loss: CrossEntropy 0: [2023-03-16 15:25:50,230] [INFO] [utils.py:827:see_memory_usage] After Building Model 0: [2023-03-16 15:25:50,231] [INFO] [utils.py:828:see_memory_usage] MA 2.05 GB Max_MA 2.05 GB CA 2.19 GB Max_CA 2 GB 0: [2023-03-16 15:25:50,231] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 30.88 GB, percent = 6.1% 0: setting training iterations to 190 0: > learning rate decay style: cosine 0: DeepSpeed is enabled. 0: [2023-03-16 15:25:50,234] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed info: version=0.7.5, git-hash=unknown, git-branch=unknown 0: [2023-03-16 15:26:00,785] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False 0: [2023-03-16 15:26:00,786] [INFO] [logging.py:68:log_dist] [Rank 0] Removing param_group that has no 'params' in the client Optimizer 0: [2023-03-16 15:26:00,786] [INFO] [logging.py:68:log_dist] [Rank 0] Using client Optimizer as basic optimizer 0: [2023-03-16 15:26:00,797] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed Basic Optimizer = FusedAdam 0: [2023-03-16 15:26:00,797] [INFO] [logging.py:68:log_dist] [Rank 0] Creating BF16 optimizer 0: [2023-03-16 15:26:00,917] [INFO] [utils.py:827:see_memory_usage] begin bf16_optimizer 0: [2023-03-16 15:26:00,917] [INFO] [utils.py:828:see_memory_usage] MA 2.04 GB Max_MA 2.06 GB CA 2.19 GB Max_CA 2 GB 0: [2023-03-16 15:26:00,918] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 31.56 GB, percent = 6.3% 1: ninja: no work to do. 1: Time to load utils op: 0.1548769474029541 seconds 1: Time to load utils op: 0.20252466201782227 seconds 1: Time to load utils op: 0.2017972469329834 seconds 1: Time to load utils op: 0.20215272903442383 seconds 1: Time to load utils op: 0.20258021354675293 seconds 1: Time to load utils op: 0.20230364799499512 seconds 1: Time to load utils op: 0.20289015769958496 secondsTime to load utils op: 0.2031536102294922 seconds 1: 1: Time to load utils op: 0.0008146762847900391 seconds 0: Time to load utils op: 0.20737195014953613 seconds 0: Time to load utils op: 0.20782256126403809 secondsTime to load utils op: 0.20769691467285156 seconds 0: Time to load utils op: 0.2111339569091797 seconds 0: Time to load utils op: 0.20803308486938477 seconds 0: 0: Time to load utils op: 0.20748186111450195 secondsTime to load utils op: 0.20761466026306152 seconds 0: 0: Time to load utils op: 0.10172033309936523 seconds 1: Time to load utils op: 0.0003466606140136719 seconds 1: Time to load utils op: 0.00042510032653808594 seconds 1: Time to load utils op: 0.00044798851013183594 seconds 1: Time to load utils op: 0.00039649009704589844 seconds 1: Time to load utils op: 0.00038909912109375 seconds 1: Time to load utils op: 0.0003886222839355469 seconds 1: Time to load utils op: 0.0003771781921386719 seconds 0: Time to load utils op: 0.0005524158477783203 seconds 0: Time to load utils op: 0.00046181678771972656 seconds 0: Time to load utils op: 0.00047659873962402344 seconds 0: Time to load utils op: 0.0005853176116943359 seconds 0: Time to load utils op: 0.0006051063537597656 seconds 0: Time to load utils op: 0.0006420612335205078 seconds 0: Time to load utils op: 0.0006251335144042969 seconds 0: [2023-03-16 15:26:01,149] [INFO] [utils.py:827:see_memory_usage] before initializing group 0 0: [2023-03-16 15:26:01,150] [INFO] [utils.py:828:see_memory_usage] MA 2.04 GB Max_MA 2.04 GB CA 2.19 GB Max_CA 2 GB 0: [2023-03-16 15:26:01,150] 
[INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 31.72 GB, percent = 6.3% 0: [2023-03-16 15:26:01,267] [INFO] [utils.py:827:see_memory_usage] after initializing group 0 0: [2023-03-16 15:26:01,268] [INFO] [utils.py:828:see_memory_usage] MA 4.35 GB Max_MA 4.35 GB CA 5.58 GB Max_CA 6 GB 0: [2023-03-16 15:26:01,268] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 31.72 GB, percent = 6.3% 0: [2023-03-16 15:26:01,373] [INFO] [utils.py:827:see_memory_usage] before initializing group 1 0: [2023-03-16 15:26:01,374] [INFO] [utils.py:828:see_memory_usage] MA 4.35 GB Max_MA 4.35 GB CA 5.58 GB Max_CA 6 GB 0: [2023-03-16 15:26:01,374] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 31.72 GB, percent = 6.3% 0: [2023-03-16 15:26:01,480] [INFO] [utils.py:827:see_memory_usage] after initializing group 1 0: [2023-03-16 15:26:01,481] [INFO] [utils.py:828:see_memory_usage] MA 6.38 GB Max_MA 6.38 GB CA 8.57 GB Max_CA 9 GB 0: [2023-03-16 15:26:01,481] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 31.72 GB, percent = 6.3% 0: [2023-03-16 15:26:01,585] [INFO] [utils.py:827:see_memory_usage] before initializing group 2 0: [2023-03-16 15:26:01,585] [INFO] [utils.py:828:see_memory_usage] MA 6.38 GB Max_MA 6.38 GB CA 8.57 GB Max_CA 9 GB 0: [2023-03-16 15:26:01,585] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 31.72 GB, percent = 6.3% 0: [2023-03-16 15:26:01,694] [INFO] [utils.py:827:see_memory_usage] after initializing group 2 0: [2023-03-16 15:26:01,694] [INFO] [utils.py:828:see_memory_usage] MA 6.38 GB Max_MA 6.38 GB CA 8.57 GB Max_CA 9 GB 0: [2023-03-16 15:26:01,695] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 31.72 GB, percent = 6.3% 0: [2023-03-16 15:26:01,799] [INFO] [utils.py:827:see_memory_usage] before initialize_optimizer 0: [2023-03-16 15:26:01,799] [INFO] [utils.py:828:see_memory_usage] MA 6.38 GB Max_MA 6.38 GB CA 8.57 GB Max_CA 9 GB 0: [2023-03-16 15:26:01,799] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 31.72 GB, percent = 6.3% 0: [2023-03-16 15:26:01,908] [INFO] [utils.py:827:see_memory_usage] end initialize_optimizer 0: [2023-03-16 15:26:01,909] [INFO] [utils.py:828:see_memory_usage] MA 6.89 GB Max_MA 6.89 GB CA 8.95 GB Max_CA 9 GB 0: [2023-03-16 15:26:01,909] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 31.72 GB, percent = 6.3% 0: [2023-03-16 15:26:02,014] [INFO] [utils.py:827:see_memory_usage] end bf16_optimizer 0: [2023-03-16 15:26:02,015] [INFO] [utils.py:828:see_memory_usage] MA 6.89 GB Max_MA 6.89 GB CA 8.95 GB Max_CA 9 GB 0: [2023-03-16 15:26:02,015] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 31.72 GB, percent = 6.3% 0: [2023-03-16 15:26:02,015] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed Final Optimizer = FusedAdam 0: [2023-03-16 15:26:02,015] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed using client LR scheduler 0: [2023-03-16 15:26:02,015] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed LR Scheduler = 0: [2023-03-16 15:26:02,015] [INFO] [logging.py:68:log_dist] [Rank 0] step=0, skipped=0, lr=[0.0, 0.0, 0.0], mom=[(0.9, 0.999), (0.9, 0.999), (0.9, 0.999)] 0: [2023-03-16 15:26:02,016] [INFO] [config.py:1007:print] DeepSpeedEngine configuration: 0: [2023-03-16 15:26:02,016] [INFO] [config.py:1011:print] activation_checkpointing_config { 0: "partition_activations": false, 0: "contiguous_memory_optimization": false, 0: "cpu_checkpointing": false, 0: "number_checkpoints": null, 0: 
"synchronize_checkpoint_boundary": false, 0: "profile": false 0: } 0: [2023-03-16 15:26:02,016] [INFO] [config.py:1011:print] aio_config ................... {'block_size': 1048576, 'queue_depth': 8, 'thread_count': 1, 'single_submit': False, 'overlap_events': True} 0: [2023-03-16 15:26:02,016] [INFO] [config.py:1011:print] amp_enabled .................. False 0: [2023-03-16 15:26:02,016] [INFO] [config.py:1011:print] amp_params ................... False 0: [2023-03-16 15:26:02,016] [INFO] [config.py:1011:print] autotuning_config ............ { 0: "enabled": false, 0: "start_step": null, 0: "end_step": null, 0: "metric_path": null, 0: "arg_mappings": null, 0: "metric": "throughput", 0: "model_info": null, 0: "results_dir": "/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/autotuning_results", 0: "exps_dir": "/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/autotuning_exps", 0: "overwrite": true, 0: "fast": true, 0: "start_profile_step": 3, 0: "end_profile_step": 5, 0: "tuner_type": "gridsearch", 0: "tuner_early_stopping": 5, 0: "tuner_num_trials": 50, 0: "model_info_path": null, 0: "mp_size": 1, 0: "max_train_batch_size": null, 0: "min_train_batch_size": 1, 0: "max_train_micro_batch_size_per_gpu": 1.024000e+03, 0: "min_train_micro_batch_size_per_gpu": 1, 0: "num_tuning_micro_batch_sizes": 3 0: } 0: [2023-03-16 15:26:02,016] [INFO] [config.py:1011:print] bfloat16_enabled ............. True 0: [2023-03-16 15:26:02,016] [INFO] [config.py:1011:print] checkpoint_parallel_write_pipeline False 0: [2023-03-16 15:26:02,016] [INFO] [config.py:1011:print] checkpoint_tag_validation_enabled True 0: [2023-03-16 15:26:02,016] [INFO] [config.py:1011:print] checkpoint_tag_validation_fail False 0: [2023-03-16 15:26:02,016] [INFO] [config.py:1011:print] comms_config ................. 0: [2023-03-16 15:26:02,016] [INFO] [config.py:1011:print] communication_data_type ...... None 0: [2023-03-16 15:26:02,016] [INFO] [config.py:1011:print] compression_config ........... {'weight_quantization': {'shared_parameters': {'enabled': False, 'quantizer_kernel': False, 'schedule_offset': 0, 'quantize_groups': 1, 'quantize_verbose': False, 'quantization_type': 'symmetric', 'quantize_weight_in_forward': False, 'rounding': 'nearest', 'fp16_mixed_quantize': False, 'quantize_change_ratio': 0.001}, 'different_groups': {}}, 'activation_quantization': {'shared_parameters': {'enabled': False, 'quantization_type': 'symmetric', 'range_calibration': 'dynamic', 'schedule_offset': 1000}, 'different_groups': {}}, 'sparse_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'row_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'head_pruning': {'shared_parameters': {'enabled': False, 'method': 'topk', 'schedule_offset': 1000}, 'different_groups': {}}, 'channel_pruning': {'shared_pa 0: rameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'layer_reduction': {'enabled': False}} 0: [2023-03-16 15:26:02,017] [INFO] [config.py:1011:print] curriculum_enabled ........... False 0: [2023-03-16 15:26:02,017] [INFO] [config.py:1011:print] curriculum_params ............ False 0: [2023-03-16 15:26:02,017] [INFO] [config.py:1011:print] dataloader_drop_last ......... False 0: [2023-03-16 15:26:02,017] [INFO] [config.py:1011:print] disable_allgather ............ 
False 0: [2023-03-16 15:26:02,017] [INFO] [config.py:1011:print] dump_state ................... False 0: [2023-03-16 15:26:02,017] [INFO] [config.py:1011:print] dynamic_loss_scale_args ...... None 0: [2023-03-16 15:26:02,017] [INFO] [config.py:1011:print] eigenvalue_enabled ........... False 0: [2023-03-16 15:26:02,017] [INFO] [config.py:1011:print] eigenvalue_gas_boundary_resolution 1 0: [2023-03-16 15:26:02,017] [INFO] [config.py:1011:print] eigenvalue_layer_name ........ bert.encoder.layer 0: [2023-03-16 15:26:02,017] [INFO] [config.py:1011:print] eigenvalue_layer_num ......... 0 0: [2023-03-16 15:26:02,017] [INFO] [config.py:1011:print] eigenvalue_max_iter .......... 100 0: [2023-03-16 15:26:02,017] [INFO] [config.py:1011:print] eigenvalue_stability ......... 1e-06 0: [2023-03-16 15:26:02,017] [INFO] [config.py:1011:print] eigenvalue_tol ............... 0.01 0: [2023-03-16 15:26:02,017] [INFO] [config.py:1011:print] eigenvalue_verbose ........... False 0: [2023-03-16 15:26:02,017] [INFO] [config.py:1011:print] elasticity_enabled ........... False 0: [2023-03-16 15:26:02,017] [INFO] [config.py:1011:print] flops_profiler_config ........ { 0: "enabled": false, 0: "profile_step": 1, 0: "module_depth": -1, 0: "top_modules": 1, 0: "detailed": true, 0: "output_file": null 0: } 0: [2023-03-16 15:26:02,017] [INFO] [config.py:1011:print] fp16_auto_cast ............... None 0: [2023-03-16 15:26:02,017] [INFO] [config.py:1011:print] fp16_enabled ................. False 0: [2023-03-16 15:26:02,017] [INFO] [config.py:1011:print] fp16_master_weights_and_gradients False 0: [2023-03-16 15:26:02,017] [INFO] [config.py:1011:print] global_rank .................. 0 0: [2023-03-16 15:26:02,017] [INFO] [config.py:1011:print] gradient_accumulation_steps .. 1 0: [2023-03-16 15:26:02,017] [INFO] [config.py:1011:print] gradient_clipping ............ 1.0 0: [2023-03-16 15:26:02,017] [INFO] [config.py:1011:print] gradient_predivide_factor .... 1.0 0: [2023-03-16 15:26:02,017] [INFO] [config.py:1011:print] initial_dynamic_scale ........ 1 0: [2023-03-16 15:26:02,017] [INFO] [config.py:1011:print] load_universal_checkpoint .... False 0: [2023-03-16 15:26:02,017] [INFO] [config.py:1011:print] loss_scale ................... 1.0 0: [2023-03-16 15:26:02,017] [INFO] [config.py:1011:print] memory_breakdown ............. False 0: [2023-03-16 15:26:02,017] [INFO] [config.py:1011:print] monitor_config ............... 0: [2023-03-16 15:26:02,017] [INFO] [config.py:1011:print] nebula_config ................ { 0: "enabled": false, 0: "persistent_storage_path": null, 0: "persistent_time_interval": 100, 0: "num_of_version_in_retention": 2, 0: "enable_nebula_load": true, 0: "load_path": null 0: } 0: [2023-03-16 15:26:02,017] [INFO] [config.py:1011:print] optimizer_legacy_fusion ...... False 0: [2023-03-16 15:26:02,017] [INFO] [config.py:1011:print] optimizer_name ............... None 0: [2023-03-16 15:26:02,017] [INFO] [config.py:1011:print] optimizer_params ............. None 0: [2023-03-16 15:26:02,017] [INFO] [config.py:1011:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0} 0: [2023-03-16 15:26:02,017] [INFO] [config.py:1011:print] pld_enabled .................. False 0: [2023-03-16 15:26:02,017] [INFO] [config.py:1011:print] pld_params ................... False 0: [2023-03-16 15:26:02,017] [INFO] [config.py:1011:print] prescale_gradients ........... 
False 0: [2023-03-16 15:26:02,017] [INFO] [config.py:1011:print] scheduler_name ............... None 0: [2023-03-16 15:26:02,017] [INFO] [config.py:1011:print] scheduler_params ............. None 0: [2023-03-16 15:26:02,018] [INFO] [config.py:1011:print] sparse_attention ............. None 0: [2023-03-16 15:26:02,018] [INFO] [config.py:1011:print] sparse_gradients_enabled ..... False 0: [2023-03-16 15:26:02,018] [INFO] [config.py:1011:print] steps_per_print .............. 2000 0: [2023-03-16 15:26:02,018] [INFO] [config.py:1011:print] train_batch_size ............. 256 0: [2023-03-16 15:26:02,018] [INFO] [config.py:1011:print] train_micro_batch_size_per_gpu 16 0: [2023-03-16 15:26:02,018] [INFO] [config.py:1011:print] use_node_local_storage ....... False 0: [2023-03-16 15:26:02,018] [INFO] [config.py:1011:print] wall_clock_breakdown ......... False 0: [2023-03-16 15:26:02,018] [INFO] [config.py:1011:print] world_size ................... 16 0: [2023-03-16 15:26:02,018] [INFO] [config.py:1011:print] zero_allow_untested_optimizer False 0: [2023-03-16 15:26:02,018] [INFO] [config.py:1011:print] zero_config .................. stage=0 contiguous_gradients=True reduce_scatter=True reduce_bucket_size=500000000 allgather_partitions=True allgather_bucket_size=500000000 overlap_comm=False load_from_fp32_weights=True elastic_checkpoint=False offload_param=None offload_optimizer=None sub_group_size=1000000000 cpu_offload_param=None cpu_offload_use_pin_memory=None cpu_offload=None prefetch_bucket_size=50000000 param_persistence_threshold=100000 model_persistence_threshold=9223372036854775807 max_live_parameters=1000000000 max_reuse_distance=1000000000 gather_16bit_weights_on_model_save=False stage3_gather_fp16_weights_on_model_save=False ignore_unused_parameters=True legacy_stage1=False round_robin_gradients=False 0: [2023-03-16 15:26:02,018] [INFO] [config.py:1011:print] zero_enabled ................. False 0: [2023-03-16 15:26:02,018] [INFO] [config.py:1011:print] zero_optimization_stage ...... 0 0: [2023-03-16 15:26:02,018] [INFO] [config.py:996:print_user_config] json = { 0: "train_micro_batch_size_per_gpu": 16, 0: "train_batch_size": 256, 0: "gradient_clipping": 1.0, 0: "zero_optimization": { 0: "stage": 0 0: }, 0: "bf16": { 0: "enabled": true 0: }, 0: "steps_per_print": 2.000000e+03, 0: "wall_clock_breakdown": false 0: } 0: Time to load utils op: 0.0004336833953857422 seconds 0: [2023-03-16 15:26:02,018] [INFO] [engine.py:87:__init__] CONFIG: micro_batches=1 micro_batch_size=16 0: [2023-03-16 15:26:02,066] [INFO] [engine.py:145:__init__] RANK=0 STAGE=0 LAYERS=33 [0, 33) STAGE_PARAMS=1096338432 (1096.338M) TOTAL_PARAMS=1096338432 (1096.338M) UNIQUE_PARAMS=1096338432 (1096.338M) 0: [2023-03-16 15:26:02,068] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_1b1100m100m/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. 0: WARNING: could not find the metadata file checkpoints_1b1100m100m 0: will not load any checkpoints and will start from random 0: [2023-03-16 15:26:02,068] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_1b1100m100m/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. 
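The batch-size bookkeeping printed above ("setting number of micro-batches to constant 1", "train_batch_size ... 256", "CONFIG: micro_batches=1 micro_batch_size=16") and the earlier "setting training iterations to 190" all follow from the launch arguments. A small sketch of that arithmetic, with illustrative variable names:

```python
# Batch-size and iteration bookkeeping for this run (values from the argument dump above).
world_size        = 16
tensor_parallel   = 1
pipeline_parallel = 1
data_parallel     = world_size // (tensor_parallel * pipeline_parallel)   # 16
micro_batch_size  = 16
global_batch_size = 256
train_samples     = 48_828

grad_accum = global_batch_size // (micro_batch_size * data_parallel)  # 1 -> "micro_batches=1"
iterations = train_samples // global_batch_size                       # 190 -> "training iterations to 190"
print(grad_accum, iterations)
```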
0: [2023-03-16 15:26:02,068] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_1b1100m100m/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. 0: [2023-03-16 15:26:02,068] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_1b1100m100m/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. 0: [2023-03-16 15:26:02,068] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_1b1100m100m/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. 0: [2023-03-16 15:26:02,068] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_1b1100m100m/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. 0: [2023-03-16 15:26:02,068] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_1b1100m100m/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. 0: [2023-03-16 15:26:02,068] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_1b1100m100m/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. 1: [2023-03-16 15:26:02,069] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_1b1100m100m/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. 1: [2023-03-16 15:26:02,069] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_1b1100m100m/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. 1: [2023-03-16 15:26:02,069] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_1b1100m100m/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. 1: [2023-03-16 15:26:02,069] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_1b1100m100m/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. 1: [2023-03-16 15:26:02,069] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_1b1100m100m/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. 1: [2023-03-16 15:26:02,070] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_1b1100m100m/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. 1: [2023-03-16 15:26:02,070] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_1b1100m100m/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. 
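The repeated load_checkpoint warnings above simply mean this is a fresh run: DeepSpeed looks for a small text file named `latest` inside the checkpoint directory that names the tag to resume from, and falls back to random initialization when it is absent (as the "will not load any checkpoints and will start from random" line confirms). A minimal sketch of that lookup convention, not the engine's own code:

```python
# Sketch of the `latest`-tag convention DeepSpeed warns about above.
import os

ckpt_dir = "checkpoints_1b1100m100m"
latest_file = os.path.join(ckpt_dir, "latest")

if os.path.isfile(latest_file):
    with open(latest_file) as f:
        tag = f.read().strip()          # e.g. "global_step190" once this job has saved
    print("would resume from", os.path.join(ckpt_dir, tag))
else:
    print("no latest file -> start from random init, as in this log")
```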
1: [2023-03-16 15:26:02,070] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_1b1100m100m/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. 1: time (ms) | load-checkpoint: 0.93 0: estimated model parameters: 1.096338432 0: estimated model parameters without embeddings: 1.002523648 0: [after model, optimizer, and learning rate scheduler are built] datetime: 2023-03-16 15:26:02 0: > building train, validation, and test datasets ... 0: > datasets target sizes (minimum size): 0: train: 48828 0: validation: 256 0: test: 256 0: > building train, validation, and test datasets for GPT ... 0: > building dataset index ... 0: reading sizes... 0: reading pointers... 0: reading document index... 0: creating numpy buffer of mmap... 0: creating memory view of numpy buffer... 0: > finished creating indexed dataset in 0.007424 seconds 0: number of documents: 208931 0: > dataset split: 0: train: 0: document indices in [0, 208931) total of 208931 documents 0: > loading doc-idx mapping from /scratch/project_462000119/data/c4_subsampled/gpt2tok_c4_en_100M_text_document_train_indexmap_48828ns_2048sl_1234s_doc_idx.npy 0: > loading sample-idx mapping from /scratch/project_462000119/data/c4_subsampled/gpt2tok_c4_en_100M_text_document_train_indexmap_48828ns_2048sl_1234s_sample_idx.npy 0: > loading shuffle-idx mapping from /scratch/project_462000119/data/c4_subsampled/gpt2tok_c4_en_100M_text_document_train_indexmap_48828ns_2048sl_1234s_shuffle_idx.npy 0: loaded indexed file in 0.084 seconds 0: total number of samples: 97610 0: total number of epochs: 2 0: > building dataset index ... 0: reading sizes... 0: reading pointers... 0: reading document index... 0: creating numpy buffer of mmap... 0: creating memory view of numpy buffer... 0: > finished creating indexed dataset in 0.050375 seconds 0: number of documents: 364608 0: > dataset split: 0: validation: 0: document indices in [0, 364608) total of 364608 documents 0: > loading doc-idx mapping from /scratch/project_462000119/data/c4_validation/gpt2tok_c4validation_rerun_text_document_validation_indexmap_256ns_2048sl_1234s_doc_idx.npy 0: > loading sample-idx mapping from /scratch/project_462000119/data/c4_validation/gpt2tok_c4validation_rerun_text_document_validation_indexmap_256ns_2048sl_1234s_sample_idx.npy 0: > loading shuffle-idx mapping from /scratch/project_462000119/data/c4_validation/gpt2tok_c4validation_rerun_text_document_validation_indexmap_256ns_2048sl_1234s_shuffle_idx.npy 0: loaded indexed file in 0.073 seconds 0: total number of samples: 84978 0: total number of epochs: 1 0: > finished creating GPT datasets ... 0: [after dataloaders are built] datetime: 2023-03-16 15:26:14 0: done with setup ... 0: training ... 
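The dataset sizes above are consistent with the sample target: 48,828 samples of 2,048 tokens is roughly 100M tokens, i.e. about two passes over the 100M-token C4 subset, which is why the index builder reports 97,610 samples over 2 epochs. A short accounting sketch:

```python
# Token/epoch accounting for the train split (figures from the dataset log above).
seq_len           = 2048
train_samples     = 48_828
global_batch_size = 256
iterations        = train_samples // global_batch_size          # 190

tokens_requested = train_samples * seq_len                      # 99_999_744, ~100M tokens
tokens_consumed  = iterations * global_batch_size * seq_len     # 99_614_720, the final "consumed tokens"
print(tokens_requested, tokens_consumed)
# The index builder reports 97_610 samples across 2 epochs of the 100M-token subset,
# so reaching the requested 48_828 samples needs a second pass over the data.
```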
0: Number of parameters: [tensor rank - pipeline rank] w/ and w/o embeddings: 1: time (ms) | model-and-optimizer-setup: 12901.55 | train/valid/test-data-iterators-setup: 10989.46 0: [000-000] 1.0963B / 1.0025B 0: [before the start of training step] datetime: 2023-03-16 15:26:14 0: [2023-03-16 15:26:14,361] [INFO] [checkpointing.py:553:forward] Activation Checkpointing Information 0: [2023-03-16 15:26:14,361] [INFO] [checkpointing.py:554:forward] ----Partition Activations False, CPU CHECKPOINTING False 0: [2023-03-16 15:26:14,361] [INFO] [checkpointing.py:557:forward] ----contiguous Memory Checkpointing False with None total layers 0: [2023-03-16 15:26:14,361] [INFO] [checkpointing.py:560:forward] ----Synchronization False 0: [2023-03-16 15:26:14,361] [INFO] [checkpointing.py:561:forward] ----Profiling time in checkpointing False 0: [Rank 0] (after 10 iterations) memory (MB) | allocated: 15441.7236328125 | max allocated: 28072.09814453125 | reserved: 35526.0 | max reserved: 35526.0 1: iteration 10/ 190 | consumed samples: 2560 | consumed tokens: 5242880 | elapsed time per iteration (s): 6.73 | learning rate: 1.992E-04 | global batch size: 256 | lm loss: 1.004626E+01 | grad norm: 2.534 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 38.030 | TFLOPs: 49.08 | 1: iteration 20/ 190 | consumed samples: 5120 | consumed tokens: 10485760 | elapsed time per iteration (s): 5.56 | learning rate: 1.960E-04 | global batch size: 256 | lm loss: 7.939323E+00 | grad norm: 1.727 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 46.061 | TFLOPs: 59.45 | 1: iteration 30/ 190 | consumed samples: 7680 | consumed tokens: 15728640 | elapsed time per iteration (s): 5.56 | learning rate: 1.903E-04 | global batch size: 256 | lm loss: 7.717110E+00 | grad norm: 0.604 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 46.009 | TFLOPs: 59.38 | 1: iteration 40/ 190 | consumed samples: 10240 | consumed tokens: 20971520 | elapsed time per iteration (s): 5.57 | learning rate: 1.825E-04 | global batch size: 256 | lm loss: 7.652970E+00 | grad norm: 0.787 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 45.993 | TFLOPs: 59.36 | 1: iteration 50/ 190 | consumed samples: 12800 | consumed tokens: 26214400 | elapsed time per iteration (s): 5.58 | learning rate: 1.727E-04 | global batch size: 256 | lm loss: 7.558895E+00 | grad norm: 0.418 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 45.873 | TFLOPs: 59.20 | 1: iteration 60/ 190 | consumed samples: 15360 | consumed tokens: 31457280 | elapsed time per iteration (s): 5.60 | learning rate: 1.611E-04 | global batch size: 256 | lm loss: 7.447041E+00 | grad norm: 0.748 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 45.736 | TFLOPs: 59.03 | 1: iteration 70/ 190 | consumed samples: 17920 | consumed tokens: 36700160 | elapsed time per iteration (s): 5.61 | learning rate: 1.482E-04 | global batch size: 256 | lm loss: 7.338550E+00 | grad norm: 0.516 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 45.645 | TFLOPs: 58.91 | 1: iteration 80/ 190 | consumed samples: 20480 | consumed tokens: 41943040 | elapsed time per iteration (s): 5.62 | learning rate: 1.341E-04 | global batch size: 256 | lm loss: 7.206721E+00 | grad norm: 
0.727 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 45.531 | TFLOPs: 58.76 |
1: iteration 90/ 190 | consumed samples: 23040 | consumed tokens: 47185920 | elapsed time per iteration (s): 5.63 | learning rate: 1.194E-04 | global batch size: 256 | lm loss: 7.072395E+00 | grad norm: 0.725 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 45.471 | TFLOPs: 58.69 |
1: iteration 100/ 190 | consumed samples: 25600 | consumed tokens: 52428800 | elapsed time per iteration (s): 5.63 | learning rate: 1.045E-04 | global batch size: 256 | lm loss: 6.971128E+00 | grad norm: 0.592 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 45.444 | TFLOPs: 58.65 |
1: iteration 110/ 190 | consumed samples: 28160 | consumed tokens: 57671680 | elapsed time per iteration (s): 5.63 | learning rate: 8.969E-05 | global batch size: 256 | lm loss: 6.881360E+00 | grad norm: 0.329 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 45.437 | TFLOPs: 58.64 |
1: iteration 120/ 190 | consumed samples: 30720 | consumed tokens: 62914560 | elapsed time per iteration (s): 5.64 | learning rate: 7.545E-05 | global batch size: 256 | lm loss: 6.831157E+00 | grad norm: 0.305 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 45.407 | TFLOPs: 58.60 |
1: iteration 130/ 190 | consumed samples: 33280 | consumed tokens: 68157440 | elapsed time per iteration (s): 5.64 | learning rate: 6.217E-05 | global batch size: 256 | lm loss: 6.783172E+00 | grad norm: 0.289 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 45.415 | TFLOPs: 58.61 |
1: iteration 140/ 190 | consumed samples: 35840 | consumed tokens: 73400320 | elapsed time per iteration (s): 5.64 | learning rate: 5.020E-05 | global batch size: 256 | lm loss: 6.730360E+00 | grad norm: 0.263 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 45.416 | TFLOPs: 58.61 |
1: iteration 150/ 190 | consumed samples: 38400 | consumed tokens: 78643200 | elapsed time per iteration (s): 5.64 | learning rate: 3.989E-05 | global batch size: 256 | lm loss: 6.698553E+00 | grad norm: 0.245 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 45.416 | TFLOPs: 58.61 |
1: iteration 160/ 190 | consumed samples: 40960 | consumed tokens: 83886080 | elapsed time per iteration (s): 5.64 | learning rate: 3.151E-05 | global batch size: 256 | lm loss: 6.685641E+00 | grad norm: 0.176 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 45.426 | TFLOPs: 58.63 |
1: iteration 170/ 190 | consumed samples: 43520 | consumed tokens: 89128960 | elapsed time per iteration (s): 5.64 | learning rate: 2.530E-05 | global batch size: 256 | lm loss: 6.662083E+00 | grad norm: 0.182 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 45.408 | TFLOPs: 58.60 |
1: iteration 180/ 190 | consumed samples: 46080 | consumed tokens: 94371840 | elapsed time per iteration (s): 5.64 | learning rate: 2.143E-05 | global batch size: 256 | lm loss: 6.634340E+00 | grad norm: 0.179 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 45.415 | TFLOPs: 58.61 |
1: iteration 190/ 190 | consumed samples: 48640 | consumed tokens: 99614720 | elapsed time per iteration (s): 5.64 | learning rate: 2.001E-05 | global batch size: 256 | lm loss: 6.631590E+00 | grad norm: 0.181 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 45.409 | TFLOPs: 58.60 |
0: [after training is done] datetime: 2023-03-16 15:44:12
0: saving checkpoint at iteration 190 to checkpoints_1b1100m100m
1: -----------------------------------------------------------------------------------------------------------------
1: validation loss at the end of training for val data | lm loss value: 6.584912E+00 | lm loss PPL: 7.240875E+02 |
1: -----------------------------------------------------------------------------------------------------------------
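As a quick sanity check, the throughput, TFLOPs, validation perplexity and learning-rate values logged above can be reproduced from the arguments at the top of this log. A minimal Python sketch, assuming 16 GPUs (2 nodes of 8), a GPT-2 vocabulary padded to 50304 tokens, and the usual Megatron FLOPs estimate with a recompute factor of 4 for --checkpoint-activations; the cosine-decay line is an approximation of the scheduler but matches the logged learning rates:

import math

gbs, seq, layers, hidden, vocab, n_gpus = 256, 2048, 26, 1792, 50304, 16
iter_time = 5.64                            # elapsed time per iteration (s), as logged

# samples per second is just the global batch size over the iteration time
print(gbs / iter_time)                      # ~45.4, matches "samples per second"

# Megatron-style FLOPs estimate; factor 4 because --checkpoint-activations
# recomputes the forward pass during the backward pass
flops = (24 * 4 * gbs * seq * layers * hidden**2
         * (1 + seq / (6 * hidden) + vocab / (16 * layers * hidden)))
print(flops / iter_time / n_gpus / 1e12)    # ~58.6 TFLOPs per GPU, as logged

# validation perplexity is exp(lm loss)
print(math.exp(6.584912))                   # ~724.1, i.e. 7.240875E+02

# cosine decay from --lr 2e-4 to --min-lr 2e-5 over --lr-decay-samples 48_828
# after --lr-warmup-samples 488; checked against iteration 100 (25600 samples)
max_lr, min_lr, warmup, decay = 2e-4, 2e-5, 488, 48_828
progress = (25_600 - warmup) / (decay - warmup)
print(min_lr + 0.5 * (max_lr - min_lr) * (1 + math.cos(math.pi * progress)))  # ~1.045E-04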
0: [2023-03-16 15:44:13,607] [INFO] [logging.py:68:log_dist] [Rank 0] [Torch] Checkpoint global_step190 is begin to save!
0: [2023-03-16 15:44:13,706] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/layer_01-model_00-model_states.pt...
0: [2023-03-16 15:44:13,924] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/layer_01-model_00-model_states.pt.
0: [2023-03-16 15:44:13,925] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/layer_03-model_00-model_states.pt...
0: [2023-03-16 15:44:14,007] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/layer_03-model_00-model_states.pt.
0: [2023-03-16 15:44:14,007] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/layer_04-model_00-model_states.pt...
0: [2023-03-16 15:44:14,082] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/layer_04-model_00-model_states.pt.
0: [2023-03-16 15:44:14,083] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/layer_05-model_00-model_states.pt...
0: [2023-03-16 15:44:14,156] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/layer_05-model_00-model_states.pt.
0: [2023-03-16 15:44:14,156] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/layer_06-model_00-model_states.pt...
0: [2023-03-16 15:44:14,229] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/layer_06-model_00-model_states.pt.
0: [2023-03-16 15:44:14,229] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/layer_07-model_00-model_states.pt...
0: [2023-03-16 15:44:14,301] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/layer_07-model_00-model_states.pt.
0: [2023-03-16 15:44:14,302] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/layer_08-model_00-model_states.pt...
0: [2023-03-16 15:44:14,375] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/layer_08-model_00-model_states.pt.
0: [2023-03-16 15:44:14,375] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/layer_09-model_00-model_states.pt...
0: [2023-03-16 15:44:14,447] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/layer_09-model_00-model_states.pt.
0: [2023-03-16 15:44:14,447] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/layer_10-model_00-model_states.pt...
0: [2023-03-16 15:44:14,521] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/layer_10-model_00-model_states.pt.
0: [2023-03-16 15:44:14,521] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/layer_11-model_00-model_states.pt...
0: [2023-03-16 15:44:14,594] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/layer_11-model_00-model_states.pt.
0: [2023-03-16 15:44:14,594] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/layer_12-model_00-model_states.pt...
0: [2023-03-16 15:44:14,665] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/layer_12-model_00-model_states.pt.
0: [2023-03-16 15:44:14,665] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/layer_13-model_00-model_states.pt...
0: [2023-03-16 15:44:14,741] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/layer_13-model_00-model_states.pt.
0: [2023-03-16 15:44:14,741] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/layer_14-model_00-model_states.pt...
0: [2023-03-16 15:44:14,812] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/layer_14-model_00-model_states.pt.
0: [2023-03-16 15:44:14,812] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/layer_15-model_00-model_states.pt...
0: [2023-03-16 15:44:14,886] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/layer_15-model_00-model_states.pt.
0: [2023-03-16 15:44:14,887] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/layer_16-model_00-model_states.pt...
0: [2023-03-16 15:44:14,960] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/layer_16-model_00-model_states.pt.
0: [2023-03-16 15:44:14,960] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/layer_17-model_00-model_states.pt...
0: [2023-03-16 15:44:15,032] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/layer_17-model_00-model_states.pt.
0: [2023-03-16 15:44:15,033] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/layer_18-model_00-model_states.pt...
0: [2023-03-16 15:44:15,108] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/layer_18-model_00-model_states.pt.
0: [2023-03-16 15:44:15,109] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/layer_19-model_00-model_states.pt...
0: [2023-03-16 15:44:15,183] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/layer_19-model_00-model_states.pt.
0: [2023-03-16 15:44:15,184] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/layer_20-model_00-model_states.pt...
0: [2023-03-16 15:44:15,255] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/layer_20-model_00-model_states.pt.
0: [2023-03-16 15:44:15,256] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/layer_21-model_00-model_states.pt...
0: [2023-03-16 15:44:15,332] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/layer_21-model_00-model_states.pt.
0: [2023-03-16 15:44:15,332] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/layer_22-model_00-model_states.pt...
0: [2023-03-16 15:44:15,407] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/layer_22-model_00-model_states.pt.
0: [2023-03-16 15:44:15,408] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/layer_23-model_00-model_states.pt...
0: [2023-03-16 15:44:15,481] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/layer_23-model_00-model_states.pt.
0: [2023-03-16 15:44:15,481] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/layer_24-model_00-model_states.pt...
0: [2023-03-16 15:44:15,556] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/layer_24-model_00-model_states.pt.
0: [2023-03-16 15:44:15,556] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/layer_25-model_00-model_states.pt...
0: [2023-03-16 15:44:15,630] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/layer_25-model_00-model_states.pt.
0: [2023-03-16 15:44:15,630] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/layer_26-model_00-model_states.pt...
0: [2023-03-16 15:44:15,705] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/layer_26-model_00-model_states.pt.
0: [2023-03-16 15:44:15,706] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/layer_27-model_00-model_states.pt...
0: [2023-03-16 15:44:15,780] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/layer_27-model_00-model_states.pt.
0: [2023-03-16 15:44:15,780] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/layer_28-model_00-model_states.pt...
0: [2023-03-16 15:44:15,854] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/layer_28-model_00-model_states.pt.
0: [2023-03-16 15:44:15,854] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/layer_30-model_00-model_states.pt...
0: [2023-03-16 15:44:15,855] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/layer_30-model_00-model_states.pt.
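The layer_XX files above are DeepSpeed pipeline-module shards: with 26 transformer layers, layer_01 most likely holds the token embedding, layer_03 through layer_28 the 26 transformer blocks, and layer_30 the final layer norm (the exact index-to-module mapping depends on the pipeline layout, so treat this reading as an assumption). A minimal offline inspection sketch, assuming PyTorch is available wherever the checkpoint directory is visible and that each .pt file deserializes to a (possibly nested) dict of tensors:

import glob
import torch

ckpt_dir = "checkpoints_1b1100m100m/global_step190"   # directory written in this log

def show(obj, prefix=""):
    # recursively print tensor names, shapes and dtypes inside nested dicts
    if isinstance(obj, torch.Tensor):
        print(f"  {prefix}: {tuple(obj.shape)} {obj.dtype}")
    elif isinstance(obj, dict):
        for key, val in obj.items():
            show(val, f"{prefix}.{key}" if prefix else str(key))

for path in sorted(glob.glob(f"{ckpt_dir}/layer_*-model_00-model_states.pt")):
    print(path)
    show(torch.load(path, map_location="cpu"))

The bf16_zero_pp_rank_*_mp_rank_00_optim_states.pt files written below are the bf16 optimizer-state shards, one per data-parallel rank (16 here), and the same loop can be pointed at them as well.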
0: [2023-03-16 15:44:15,857] [INFO] [logging.py:68:log_dist] [Rank 0] Saving model checkpoint: checkpoints_1b1100m100m/global_step190/mp_rank_00_model_states.pt
0: [2023-03-16 15:44:15,857] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/mp_rank_00_model_states.pt...
0: [2023-03-16 15:44:15,861] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/mp_rank_00_model_states.pt.
0: [2023-03-16 15:44:15,871] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt...
0: [2023-03-16 15:44:15,871] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt...
0: [2023-03-16 15:44:15,871] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt...
0: [2023-03-16 15:44:15,871] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt...
0: [2023-03-16 15:44:15,871] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt...
0: [2023-03-16 15:44:15,871] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt...
0: [2023-03-16 15:44:15,871] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt...
1: [2023-03-16 15:44:15,871] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_11_mp_rank_00_optim_states.pt...
0: [2023-03-16 15:44:15,871] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt...
1: [2023-03-16 15:44:15,871] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_12_mp_rank_00_optim_states.pt...
1: [2023-03-16 15:44:15,871] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_8_mp_rank_00_optim_states.pt...
1: [2023-03-16 15:44:15,871] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_10_mp_rank_00_optim_states.pt...
1: [2023-03-16 15:44:15,871] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_9_mp_rank_00_optim_states.pt...
1: [2023-03-16 15:44:15,871] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_14_mp_rank_00_optim_states.pt...
1: [2023-03-16 15:44:15,871] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_13_mp_rank_00_optim_states.pt...
1: [2023-03-16 15:44:15,871] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_15_mp_rank_00_optim_states.pt...
0: [2023-03-16 15:44:16,908] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt.
0: [2023-03-16 15:44:16,908] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt
0: [2023-03-16 15:44:16,908] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step190 is ready now!
1: [2023-03-16 15:44:16,957] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_9_mp_rank_00_optim_states.pt.
1: [2023-03-16 15:44:16,957] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_9_mp_rank_00_optim_states.pt
1: [2023-03-16 15:44:16,957] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step190 is ready now!
1: [2023-03-16 15:44:17,013] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_11_mp_rank_00_optim_states.pt.
1: [2023-03-16 15:44:17,013] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_11_mp_rank_00_optim_states.pt
1: [2023-03-16 15:44:17,013] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step190 is ready now!
1: [2023-03-16 15:44:17,033] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_13_mp_rank_00_optim_states.pt.
1: [2023-03-16 15:44:17,033] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_13_mp_rank_00_optim_states.pt
1: [2023-03-16 15:44:17,033] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step190 is ready now!
0: [2023-03-16 15:44:17,223] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt.
0: [2023-03-16 15:44:17,223] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt
0: [2023-03-16 15:44:17,223] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step190 is ready now!
0: [2023-03-16 15:44:17,250] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt.
0: [2023-03-16 15:44:17,250] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt
0: [2023-03-16 15:44:17,250] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step190 is ready now!
0: [2023-03-16 15:44:17,250] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt.
0: [2023-03-16 15:44:17,250] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt
0: [2023-03-16 15:44:17,250] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step190 is ready now!
0: [2023-03-16 15:44:17,382] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt.
0: [2023-03-16 15:44:17,382] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt
0: [2023-03-16 15:44:17,382] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step190 is ready now!
0: [2023-03-16 15:44:17,497] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt.
0: [2023-03-16 15:44:17,497] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt
0: [2023-03-16 15:44:17,497] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step190 is ready now!
0: [2023-03-16 15:44:17,600] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt.
1: [2023-03-16 15:44:17,670] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_12_mp_rank_00_optim_states.pt.
1: [2023-03-16 15:44:17,670] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_12_mp_rank_00_optim_states.pt
1: [2023-03-16 15:44:17,670] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step190 is ready now!
1: [2023-03-16 15:44:17,726] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_15_mp_rank_00_optim_states.pt.
1: [2023-03-16 15:44:17,726] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_15_mp_rank_00_optim_states.pt
1: [2023-03-16 15:44:17,726] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step190 is ready now!
0: [2023-03-16 15:44:17,771] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt.
0: [2023-03-16 15:44:17,771] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt
0: [2023-03-16 15:44:17,771] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step190 is ready now!
1: [2023-03-16 15:44:17,850] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_8_mp_rank_00_optim_states.pt.
1: [2023-03-16 15:44:17,850] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_8_mp_rank_00_optim_states.pt
1: [2023-03-16 15:44:17,850] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step190 is ready now!
1: [2023-03-16 15:44:17,988] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_10_mp_rank_00_optim_states.pt.
1: [2023-03-16 15:44:17,988] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_10_mp_rank_00_optim_states.pt
1: [2023-03-16 15:44:17,988] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step190 is ready now!
1: [2023-03-16 15:44:18,196] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_14_mp_rank_00_optim_states.pt.
1: [2023-03-16 15:44:18,196] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_14_mp_rank_00_optim_states.pt
1: [2023-03-16 15:44:18,196] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step190 is ready now!
0: [2023-03-16 15:44:18,268] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1100m100m/global_step190/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt
0: [2023-03-16 15:44:18,268] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step190 is ready now!
0: successfully saved checkpoint at iteration 190 to checkpoints_1b1100m100m
END 3322154: Thu 16 Mar 2023 03:44:23 PM EET
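Since each iteration record above follows the same "key: value |" layout, a log like this one can be post-processed with a few lines of Python. A hypothetical sketch (the output file name 3322154.out is an assumption) that pulls the step, lm loss and per-GPU TFLOPs from every iteration line:

import re

pattern = re.compile(
    r"iteration\s+(\d+)/\s*\d+ \|.*?"
    r"lm loss: ([0-9.E+-]+) \|.*?"
    r"TFLOPs: ([0-9.]+)"
)

with open("3322154.out") as log_file:       # file name is an assumption
    for line in log_file:
        match = pattern.search(line)
        if match:
            step = int(match.group(1))
            loss = float(match.group(2))    # e.g. 6.631590E+00 at step 190
            tflops = float(match.group(3))  # e.g. 58.60 at step 190
            print(step, loss, tflops)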