ales committed on
Commit
1404413
·
1 Parent(s): 6878f29

Training in progress, step 310

preprocessor_config.json CHANGED
@@ -5,7 +5,7 @@
   "hop_length": 160,
   "mel_filters": [
     [
-      0.0,
+      -0.0,
       0.02486259490251541,
       0.0,
       0.0,
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:2db0da2370cc8ac5eb9e858c9cb2c3f9ca6742fda88a9254a81831c8bca88971
+ oid sha256:40def94302d63cf5ec805ccc463bcc048d769c47609011255208c65d23c301fa
  size 151098921
runs/Dec14_10-16-01_129-213-88-66/1671012968.620211/events.out.tfevents.1671012968.129-213-88-66.80950.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7bc4b0ba02400bebee01f95df25f17c58c2a196fd95ade083e1502546f1f98bb
+ size 5884
runs/Dec14_10-16-01_129-213-88-66/events.out.tfevents.1671012968.129-213-88-66.80950.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e2c0da368c29a30826fa13f3b91f15eaf95a457be468299c4332e6d0e32cd9bd
+ size 4136
runs/Dec14_10-18-27_129-213-88-66/1671013113.7664006/events.out.tfevents.1671013113.129-213-88-66.81512.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9e029656fcd3f7dcdd024f8f861f07d7c7286fd410baeacce6ed1dc19a55fa49
+ size 5884
runs/Dec14_10-18-27_129-213-88-66/events.out.tfevents.1671013113.129-213-88-66.81512.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:442f4da3f922983e2d2c4aa1c0e653c986dcb0b4a8c920a116cdcaed971ce010
+ size 4746
src/readme.md ADDED
## Description

Fine-tuning the [OpenAI Whisper](https://github.com/openai/whisper) model for the Belarusian language during the
[Whisper fine-tuning Event](https://github.com/huggingface/community-events/tree/main/whisper-fine-tuning-event)
hosted by HuggingFace x Lambda.

The code in this repository is a modified version of the code from the
[Whisper fine-tuning Event](https://github.com/huggingface/community-events/tree/main/whisper-fine-tuning-event) repo.

## Tips:
* start with port forwarding to monitor TensorBoard logs on a local computer:
  ```
  ssh <remote-address> -L <local_port>:localhost:<remote_tensorboard_port>
  ```
* train with output redirected to a file using `tee`

## Fine-tuning todos:
* perform evaluation of the fine-tuned model on the CommonVoice test set
* learning rate:
  * the max learning rate actually reached is lower than the LR passed as a parameter to the training script
  * when resuming training, LR scheduling behaves incorrectly
* check the exact sizes of the train, eval and test sets of CommonVoice 11

## Resuming training from an existing checkpoint
When resuming training from an existing checkpoint:
* it's better to keep all `checkpoint-\d+` dirs, and better not to rely on the data saved to `output_dir`, because:
  * not all data is saved to `output_dir`. E.g. the following files are not saved there:
    `optimizer.pt`, `rng_state.pth`, `scaler.pt`, `scheduler.pt`. So training can't be resumed correctly from
    the data saved to `output_dir`
  * when resuming training with `output_dir` as the checkpoint dir, the model saved to `output_dir` can be worse than
    the previously saved one (this needs further investigation, but it has already happened)
* the learning rate gets reset if you pass the same parameter value to the training script as in the previous run.<br>
  You need to provide the learning rate from the last step of the previous run to continue
  training correctly
  * however, even when passing the learning rate from the last step, it takes a different value in the new run than expected
    * probably because the last checkpoint was chosen incorrectly
    * or because the learning rate is treated as the starting learning rate at step 0, not at step X (where we resume).<br>
      Need to try passing the same LR that was passed as the starting LR to the very first run
* it's unclear whether the decision to save the current model
  is made by comparing current metrics with the metrics of the best checkpoint. I guess a model with worse performance
  will not overwrite the best model checkpoint already existing in the output dir, but this needs double-checking.
* we can set the `ignore_data_skip=True` training argument to avoid
  skipping data items already passed to the model - that saves time on data loading.
  * it's unclear whether the order of input items in the (shuffled) train set is the same
    across multiple reruns - i.e. whether the sampling is the same across reruns.
  * if the sampling is the same across reruns, `ignore_data_skip=True` will lead to the same items being passed to the model
    in the current run. That's OK if the previous run ended at a large step value in its last epoch.
    If not, the same elements from the same epoch will be passed to the model again.

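One way to follow the "keep all `checkpoint-\d+` dirs" advice is to copy them aside after each run. A sketch, with assumed paths: `~/checkpoint_backups` as the backup location and the Trainer's `checkpoint-<step>` layout in the current directory.

```shell
# Copy every checkpoint-<step> dir produced by the Trainer into a backup
# location, so resuming never has to rely on output_dir alone.
BACKUP_DIR="${HOME}/checkpoint_backups"
mkdir -p "$BACKUP_DIR"
for d in checkpoint-*/; do
  [ -d "$d" ] || continue   # skip when the glob matches nothing
  cp -r "$d" "$BACKUP_DIR/"
done
ls "$BACKUP_DIR"
```

This preserves `optimizer.pt`, `scheduler.pt`, `rng_state.pth` and `scaler.pt` alongside the model weights, which is exactly what `output_dir` alone lacks.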
## Questions:
* What checkpoint (the best one, I guess) is saved in `output_dir`?
  How is it overwritten when resuming training from an existing checkpoint?

### Prepended tokens
* Why are the following lines present in the Data Collator?
  ```python
  # if bos token is appended in previous tokenization step,
  # cut bos token here as it's appended later anyway
  if (labels[:, 0] == self.decoder_start_token_id).all().cpu().item():
      labels = labels[:, 1:]
  ```
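The effect of that guard can be illustrated torch-free with plain lists. Here `50258` is an assumed stand-in id for `<|startoftranscript|>`, used only for illustration.

```python
# Stand-in for model.config.decoder_start_token_id (assumed id, for illustration).
decoder_start_token_id = 50258

# Two label sequences that both begin with the decoder start token.
labels = [
    [50258, 120, 437, 999],
    [50258, 88, 50257, 50257],
]

# Same logic as the collator: only cut the first column if EVERY row
# starts with the decoder start token.
if all(row[0] == decoder_start_token_id for row in labels):
    labels = [row[1:] for row in labels]

print(labels)  # [[120, 437, 999], [88, 50257, 50257]]
```

If even one sequence did not start with the token, the check would be False and the labels would be left untouched, which is why the `.all()` is there.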
* `tokenizer.bos_token_id` vs `model.config.decoder_start_token_id`:<br>
  which one should be passed to the Data Collator as the `decoder_start_token_id` parameter?
  * Answer:
    * In this case, the two are equivalent. You can verify this:
      ```python
      print(tokenizer.bos_token_id)
      print(model.config.decoder_start_token_id)
      ```
    * Print output:
      ```
      <|startoftranscript|>
      <|startoftranscript|>
      ```
    * Technically speaking, `decoder_start_token_id` is the correct convention here. Before generating any tokens, we initialise the generate method with a starting token, which is the `decoder_start_token_id`.
      See: https://huggingface.co/blog/how-to-generate. The `decoder_start_token_id` corresponds to the initial context word sequence, and is the zeroth token generated.
    * We remove this token from the encoded labels in the data collator because we always set the zeroth generated token to the `decoder_start_token_id`. If we left the `decoder_start_token_id` in the label sequence, we would predict it as the zeroth token, and again as the first token! Because we always force it as the zeroth token, we don't need to predict it as the first token, so we remove it from the target labels
    * The other prepended tokens are not forced in the generation process, so we don't cut them in the data collator. We need to provide them to the model as target labels so that the model can learn the correct tasks from our data
    * These tokens correspond to the audio language, the task (translate or transcribe) and whether to predict timestamps
    * We need to tell the model what language the audio corresponds to and what task it's performing during fine-tuning. This way, it learns what audio corresponds to what language, and the difference between transcribing audio vs translating it

## Notes:
* using the CommonVoice 11 dataset in streaming mode.<br>
  Use `streaming=True` for the train, validation and test sets.<br>
  As an alternative, we could use `streaming=False` for the validation and test sets to save time on data processing,
  but the sizes of the validation and test sets are unknown (need to check).
  They are likely large, so pre-downloading these sets might not reduce
  the overall fine-tuning time compared to streaming mode.
* the train set contains ~370'000 audio files. With `batch_size=64`,
  one epoch takes ~5782 steps.<br>
  Because `--eval_steps="1000"`, use `--max_steps="6000"` instead of `--max_steps="5800"`
  so that evaluation metrics are also computed at the end of training.
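The step arithmetic above, as a quick check:

```python
import math

train_size = 370_000   # approximate size of the CommonVoice 11 'be' train set
batch_size = 64

# steps needed to see the whole train set once
steps_per_epoch = math.ceil(train_size / batch_size)
print(steps_per_epoch)  # 5782

# round max_steps up to the next multiple of eval_steps so the run
# ends right after an evaluation
eval_steps = 1000
max_steps = math.ceil(steps_per_epoch / eval_steps) * eval_steps
print(max_steps)  # 6000
```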
* if using Google Colab, execute `sudo chmod -R 777 .git` inside the hf repo
  to set the right permissions to be able to push trained models to the HuggingFace Hub
* Whisper's `BasicTextNormalizer` splits words containing an apostrophe:
  ```python
  >>> from transformers.models.whisper.english_normalizer import BasicTextNormalizer
  >>> normalizer = BasicTextNormalizer()
  >>> normalizer("раз'яднаць")
  'раз яднаць'
  ```
* that's why `BelarusianTextNormalizer` (an edited version of `BasicTextNormalizer`) was added to the training script:
  ```python
  >>> from run_speech_recognition_seq2seq_streaming import BelarusianTextNormalizer
  >>> normalizer_be = BelarusianTextNormalizer()
  >>> normalizer_be("раз'яднаць")
  "раз'яднаць"
  ```
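The gist of the change can be sketched with a regex whose word class keeps the apostrophe. This is a simplification under assumptions; the real `BelarusianTextNormalizer` in the training script handles more cases.

```python
import re

def normalize_keep_apostrophe(text: str) -> str:
    # Lowercase, then turn every run of characters that are neither word
    # characters nor apostrophes into a single space. BasicTextNormalizer
    # effectively drops the apostrophe too, which splits words like
    # раз'яднаць; keeping "'" inside the class preserves them.
    text = re.sub(r"[^\w']+", " ", text.lower())
    return text.strip()

print(normalize_keep_apostrophe("Раз'яднаць, калі ласка!"))  # раз'яднаць калі ласка
```

Note that in Python 3, `\w` matches Cyrillic letters by default, so no extra Unicode flag is needed.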
* need to set `use_cache` to False, since we're using gradient checkpointing and the two are incompatible
* the default linear scheduler is used
* the default Adam optimizer is used
* to save memory (allowing a larger model or batch size) we can experiment with:
  * using Adafactor instead of Adam.
    Adam requires two optimiser params per model param, while Adafactor uses only one.
    > A word of caution: Adafactor is untested for fine-tuning Whisper,
    > so we are unsure how Adafactor performance compares to Adam!
  * using 8-bit Adam from the `bitsandbytes` module:
    pass the `optim="adamw_bnb_8bit"` param to `Seq2SeqTrainingArguments`
src/requirements.txt ADDED
@@ -0,0 +1,9 @@
+ torch>=1.7
+ torchaudio
+ git+https://github.com/huggingface/transformers
+ git+https://github.com/huggingface/datasets
+ librosa
+ jiwer
+ evaluate>=0.3.0
+ more-itertools
+ tensorboard
src/run.sh CHANGED
@@ -1,42 +1,43 @@
  python src/run_speech_recognition_seq2seq_streaming.py \
  --model_name_or_path="openai/whisper-small" \
  --dataset_name="mozilla-foundation/common_voice_11_0" \
  --dataset_config_name="be" \
  --language="be" \
  --train_split_name="train" \
  --eval_split_name="validation" \
  --model_index_name="Whisper Small Belarusian" \
  \
  --max_steps="6000" \
  --output_dir="./" \
  --per_device_train_batch_size="64" \
- --per_device_eval_batch_size="32" \
+ --per_device_eval_batch_size="64" \
  --logging_steps="50" \
+ --logging_first_step \
  --learning_rate="1e-4" \
  --warmup_steps="500" \
  --evaluation_strategy="steps" \
  --eval_steps="1000" \
  --save_strategy="steps" \
  --save_steps="1000" \
  --gradient_checkpointing \
  --fp16 \
  \
  --shuffle_buffer_size="500" \
  --generation_max_length="225" \
  --max_duration_in_seconds="30" \
  --text_column_name="sentence" \
  --freeze_feature_encoder="False" \
  --report_to="tensorboard" \
  --metric_for_best_model="wer" \
  --greater_is_better="False" \
  --load_best_model_at_end \
  \
  --do_train \
  --do_eval \
  --ignore_data_skip \
  --predict_with_generate \
  --do_normalize_eval \
  --streaming \
  --use_auth_token \
  --push_to_hub \
  --hub_model_id="ales/whisper-small-belarusian"
src/run_debug.sh CHANGED
@@ -7,7 +7,7 @@ python src/run_speech_recognition_seq2seq_streaming.py \
  --eval_split_name="validation" \
  --model_index_name="Whisper Tiny Belarusian" \
  \
- --max_steps="300" \
+ --max_steps="500" \
  --max_eval_samples="64" \
  --output_dir="./" \
  --per_device_train_batch_size="32" \
src/setup_env.sh CHANGED
@@ -6,12 +6,20 @@ sudo apt-get install git-lfs

  sudo apt-get install tmux

- python3 -m venv hf_env
- source hf_env/bin/activate
- echo "source ~/hf_env/bin/activate" >> ~/.bashrc
+ cd ~
+ echo "executing env setup from $(pwd)"

- git clone https://github.com/huggingface/community-events.git
- pip install -r community-events/whisper-fine-tuning-event/requirements.txt
+ python3 -m venv ~/python_venvs/hf_env
+ source ~/python_venvs/hf_env/bin/activate
+ echo "source ~/python_venvs/hf_env/bin/activate" >> ~/.bashrc
+
+ git clone https://github.com/yks72p/whisper-finetuning-be
+ pip install -r ~/whisper-finetuning-be/requirements.txt

  git config --global credential.helper store
  huggingface-cli login
+
+ echo "env setup"
+ echo "! PLEASE LOGIN INTO GIT TO BE ABLE TO PUSH TO HF HUB !"
+ echo "> git config --global user.name <user_name>"
+ echo "> git config --global user.email <user_email>"
train_20221214-101827.log ADDED
@@ -0,0 +1,228 @@
+ 12/14/2022 10:18:27 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 1distributed training: False, 16-bits training: True
+ 12/14/2022 10:18:27 - INFO - __main__ - Training/evaluation parameters Seq2SeqTrainingArguments(
+ _n_gpu=1,
+ adafactor=False,
+ adam_beta1=0.9,
+ adam_beta2=0.999,
+ adam_epsilon=1e-08,
+ auto_find_batch_size=False,
+ bf16=False,
+ bf16_full_eval=False,
+ data_seed=None,
+ dataloader_drop_last=False,
+ dataloader_num_workers=0,
+ dataloader_pin_memory=True,
+ ddp_bucket_cap_mb=None,
+ ddp_find_unused_parameters=None,
+ ddp_timeout=1800,
+ debug=[],
+ deepspeed=None,
+ disable_tqdm=False,
+ do_eval=True,
+ do_predict=False,
+ do_train=True,
+ eval_accumulation_steps=None,
+ eval_delay=0,
+ eval_steps=10,
+ evaluation_strategy=steps,
+ fp16=True,
+ fp16_backend=auto,
+ fp16_full_eval=False,
+ fp16_opt_level=O1,
+ fsdp=[],
+ fsdp_min_num_params=0,
+ fsdp_transformer_layer_cls_to_wrap=None,
+ full_determinism=False,
+ generation_max_length=225,
+ generation_num_beams=None,
+ gradient_accumulation_steps=1,
+ gradient_checkpointing=True,
+ greater_is_better=False,
+ group_by_length=False,
+ half_precision_backend=auto,
+ hub_model_id=ales/whisper-tiny-be-test,
+ hub_private_repo=False,
+ hub_strategy=every_save,
+ hub_token=<HUB_TOKEN>,
+ ignore_data_skip=True,
+ include_inputs_for_metrics=False,
+ jit_mode_eval=False,
+ label_names=None,
+ label_smoothing_factor=0.0,
+ learning_rate=0.0001,
+ length_column_name=length,
+ load_best_model_at_end=True,
+ local_rank=-1,
+ log_level=passive,
+ log_level_replica=passive,
+ log_on_each_node=True,
+ logging_dir=./runs/Dec14_10-18-27_129-213-88-66,
+ logging_first_step=True,
+ logging_nan_inf_filter=True,
+ logging_steps=10,
+ logging_strategy=steps,
+ lr_scheduler_type=linear,
+ max_grad_norm=1.0,
+ max_steps=500,
+ metric_for_best_model=wer,
+ mp_parameters=,
+ no_cuda=False,
+ num_train_epochs=3.0,
+ optim=adamw_hf,
+ optim_args=None,
+ output_dir=./,
+ overwrite_output_dir=False,
+ past_index=-1,
+ per_device_eval_batch_size=32,
+ per_device_train_batch_size=32,
+ predict_with_generate=True,
+ prediction_loss_only=False,
+ push_to_hub=True,
+ push_to_hub_model_id=None,
+ push_to_hub_organization=None,
+ push_to_hub_token=<PUSH_TO_HUB_TOKEN>,
+ ray_scope=last,
+ remove_unused_columns=True,
+ report_to=['tensorboard'],
+ resume_from_checkpoint=None,
+ run_name=./,
+ save_on_each_node=False,
+ save_steps=10,
+ save_strategy=steps,
+ save_total_limit=None,
+ seed=42,
+ sharded_ddp=[],
+ skip_memory_metrics=True,
+ sortish_sampler=False,
+ tf32=None,
+ torch_compile=False,
+ torch_compile_backend=None,
+ torch_compile_mode=None,
+ torchdynamo=None,
+ tpu_metrics_debug=False,
+ tpu_num_cores=None,
+ use_ipex=False,
+ use_legacy_prediction_loop=False,
+ use_mps_device=False,
+ warmup_ratio=0.0,
+ warmup_steps=10,
+ weight_decay=0.0,
+ xpu_backend=None,
+ )
+ 12/14/2022 10:18:27 - INFO - __main__ - output_dir already exists. will try to load last checkpoint.
+ 12/14/2022 10:18:27 - INFO - __main__ - last_checkpoint is None. will try to read from training_args.resume_from_checkpoint
+ 12/14/2022 10:18:27 - INFO - __main__ - last_checkpoint is None. resume_from_checkpoint is either None or not existing dir. will try to read from the model saved in the root of output_dir.
+ 12/14/2022 10:18:27 - INFO - __main__ - found pytorch_model.bin inside output_dir. will continue training treating output_dir as a last checkpoint.
+ 12/14/2022 10:18:28 - INFO - datasets.info - Loading Dataset Infos from /home/ubuntu/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_11_0/f8e47235d9b4e68fa24ed71d63266a02018ccf7194b2a8c9c598a5f3ab304d9f
+ 12/14/2022 10:18:28 - INFO - datasets.info - Loading Dataset Infos from /home/ubuntu/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_11_0/f8e47235d9b4e68fa24ed71d63266a02018ccf7194b2a8c9c598a5f3ab304d9f
+ 12/14/2022 10:18:31 - WARNING - huggingface_hub.repository - /home/ubuntu/whisper-tiny-be-test/./ is already a clone of https://huggingface.co/ales/whisper-tiny-be-test. Make sure you pull the latest changes with `repo.git_pull()`.
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:350c7e60ed35dd7fe825968ffb77600485c958e44f8b775cc30447b1f2b880aa
+ oid sha256:1437f82ab7eeff36134dc9d16b104ff12b88d24a6de7414f4fa0a3a341e9c3fd
  size 3643