karbolak committed (verified)
Commit 6cac292 · 1 Parent(s): 700450a

Update of Jupyter Notebook
Poetry_Fusion_using_Llama_3.2.ipynb CHANGED
@@ -55,9 +55,7 @@
55
  "\n",
56
  "We will use a selected dataset that includes various poet styles. This dataset will be preprocessed \n",
57
  "to highlight stylistic characteristics for model fine-tuning, focusing on continuous and meaningful \n",
58
- "poetic text for optimal style fusion.\n",
59
- "\n",
60
- "*Note: Ensure the dataset is accessible and formatted to match model input requirements.*\n"
61
  ]
62
  },
63
  {
@@ -174,7 +172,7 @@
174
  "id": "xNqIYtQcUBSm"
175
  },
176
  "source": [
177
- "Let's also load the tokenizer below"
178
  ]
179
  },
180
  {
@@ -328,7 +326,7 @@
328
  "id": "aTBJVE4PaJwK"
329
  },
330
  "source": [
331
- "Here we will use the [`SFTTrainer` from TRL library](https://huggingface.co/docs/trl/main/en/sft_trainer) that gives a wrapper around transformers `Trainer` to easily fine-tune models on instruction based datasets using PEFT adapters. Let's first load the training arguments below. We choose sepcific arguments for our usecase."
332
  ]
333
  },
334
  {
@@ -483,7 +481,7 @@
483
  "id": "JjvisllacNZM"
484
  },
485
  "source": [
486
- "Now let's train the model! Simply call `trainer.train()`"
487
  ]
488
  },
489
  {
 
55
  "\n",
56
  "We will use a selected dataset that includes various poet styles. This dataset will be preprocessed \n",
57
  "to highlight stylistic characteristics for model fine-tuning, focusing on continuous and meaningful \n",
58
+ "poetic text for optimal style fusion."
 
 
59
  ]
60
  },
61
  {
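The preprocessing this cell describes (continuous, style-tagged poetic text) could be sketched as below. This is a hypothetical illustration only: the diff does not show the notebook's actual code, so the function name, the style-tag format, and the sample lines are all assumptions.

```python
# Hypothetical preprocessing sketch; the notebook's real code is not visible
# in this diff, so the tag format and function name are assumptions.
def format_example(poet: str, poem: str) -> str:
    """Prefix a poem with a poet-style tag and drop blank lines,
    yielding the continuous poetic text the cell calls for."""
    lines = [line.strip() for line in poem.splitlines() if line.strip()]
    return f"<style:{poet}>\n" + "\n".join(lines)

sample = format_example(
    "Dickinson",
    "Hope is the thing with feathers\n\nThat perches in the soul,",
)
print(sample)
```

A model fine-tuned on text in this shape can later be prompted with a style tag to steer generation toward that poet's voice.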
 
172
  "id": "xNqIYtQcUBSm"
173
  },
174
  "source": [
175
+ "The tokenizer is loaded below."
176
  ]
177
  },
178
  {
 
326
  "id": "aTBJVE4PaJwK"
327
  },
328
  "source": [
329
+ "Here we will use the [`SFTTrainer` from the TRL library](https://huggingface.co/docs/trl/main/en/sft_trainer), which provides a wrapper around the transformers `Trainer` to easily fine-tune models on instruction-based datasets using PEFT adapters. The training arguments are loaded below. We choose specific arguments for our use case."
330
  ]
331
  },
332
  {
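For orientation, the kinds of arguments typically chosen at this step look like the sketch below. Every value here is an illustrative assumption, not the notebook's actual hyperparameters, which this diff does not show; the kwargs would typically go into `transformers.TrainingArguments` and the result be handed to TRL's `SFTTrainer` together with the model, tokenizer, and dataset.

```python
# Illustrative hyperparameters only; the notebook's actual values are not
# visible in this diff. These kwargs mirror what is commonly passed to
# transformers.TrainingArguments(**training_kwargs) for a PEFT fine-tune.
training_kwargs = dict(
    output_dir="outputs",            # where checkpoints are written
    per_device_train_batch_size=2,   # small batch for a single consumer GPU
    gradient_accumulation_steps=4,   # effective batch size of 2 * 4 = 8
    learning_rate=2e-4,              # a common starting point for LoRA/PEFT
    num_train_epochs=3,
    logging_steps=10,
    report_to="wandb",               # matches the wandb logging used later
)
```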
 
481
  "id": "JjvisllacNZM"
482
  },
483
  "source": [
484
+ "Now let's train the model! Simply call `trainer.train()`. It will require a wandb.ai API key."
485
  ]
486
  },
487
  {
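Because `trainer.train()` prompts for a wandb.ai API key when wandb logging is enabled, the key can be supplied non-interactively through wandb's environment variables, as in this sketch (the key value is a placeholder, not a real key):

```python
import os

# Supplying the key up front avoids the interactive prompt that appears
# when trainer.train() initializes wandb logging.
os.environ["WANDB_API_KEY"] = "your-api-key-here"  # placeholder, not a real key

# Alternatively, keep runs local and skip the remote service entirely:
os.environ["WANDB_MODE"] = "offline"
```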