philipp-zettl committed on
Commit 6cf8c56 · verified · 1 Parent(s): 8a1c441

Update README.md

Files changed (1): README.md (+231 -3)

---
license: mit
datasets:
- philipp-zettl/qg-tydiqa_squad2
language:
- en
library_name: transformers
pipeline_tag: text2text-generation
widget:
- text: "context: The Hugging Face Hub is a platform with over 350k models, 75k datasets, and 150k demo apps (Spaces), all open source and publicly available, in an online platform where people can easily collaborate and build ML together. The Hub works as a central place where anyone can explore, experiment, collaborate, and build technology with Machine Learning. Are you ready to join the path towards open source Machine Learning? 🤗"
  example_title: 🤗 Hub
- text: "context: 🤗 Datasets is a library for easily accessing and sharing datasets for Audio, Computer Vision, and Natural Language Processing (NLP) tasks. Load a dataset in a single line of code, and use our powerful data processing methods to quickly get your dataset ready for training in a deep learning model. Backed by the Apache Arrow format, process large datasets with zero-copy reads without any memory constraints for optimal speed and efficiency. We also feature a deep integration with the Hugging Face Hub, allowing you to easily load and share a dataset with the wider machine learning community. Find your dataset today on the Hugging Face Hub, and take an in-depth look inside of it with the live viewer."
  example_title: 🤗 datasets
---

# Model Card for t5-small-qg

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->
This model was trained to generate questions from a given context.

- **Developed by:** [philipp-zettl](https://huggingface.co/philipp-zettl)
- **Model type:** Transformer (T5)
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** [google/flan-t5-small](https://huggingface.co/google/flan-t5-small)

### Model Sources

<!-- Provide the basic links for the model. -->
A fine-tune of the amazing [google/flan-t5-small](https://huggingface.co/google/flan-t5-small).

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
The model is intended to generate questions from a given context.
The context should not exceed the model's _context_ length; the fine-tuning pipeline below truncates inputs to 512 tokens.
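
A quick way to check whether a context fits is to count its tokens before generating. This is a minimal sketch, assuming the checkpoint is published as `philipp-zettl/t5-small-qg` (inferred from this card) and using the 512-token input limit from the training pipeline below:

```python
from transformers import AutoTokenizer

# Repo id assumed from this model card; adjust if the checkpoint lives elsewhere.
tokenizer = AutoTokenizer.from_pretrained("philipp-zettl/t5-small-qg")

context = "Some long text built from multiple concatenated paragraphs."
n_tokens = len(tokenizer(f"context: {context}")["input_ids"])
if n_tokens > 512:  # max input length used during fine-tuning (see Training Procedure)
    print(f"Context is {n_tokens} tokens long and will be truncated to 512.")
```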

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

No bias evaluation was performed on this model.

## How to Get Started with the Model

Use the code below to get started with the model.

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the fine-tuned model and tokenizer (repo id assumed from this model card)
model_name = 'philipp-zettl/t5-small-qg'
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = AutoModelForSeq2SeqLM.from_pretrained(model_name).to(device)
tokenizer = AutoTokenizer.from_pretrained(model_name)

context = "This is a long text based on multiple concatenated paragraphs."

model_inputs = tokenizer([f"context: {context}"], max_length=512, padding=True, truncation=True)
input_ids = torch.tensor(model_inputs['input_ids']).to(device)
attention_mask = torch.tensor(model_inputs['attention_mask']).to(device)
with torch.no_grad():
    sample_output = model.generate(input_ids[:1], max_length=85)

sample_output_text = tokenizer.decode(sample_output[0], skip_special_tokens=True)
input_text = tokenizer.decode(input_ids[0], skip_special_tokens=True)
print(f"Sample Input:\n \"{input_text}\"\n\n")
print(f"Model Output: \"{sample_output_text}\"")
```
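
Since the card declares `pipeline_tag: text2text-generation`, the model should also work through the high-level `pipeline` API. A minimal sketch, again assuming the repo id `philipp-zettl/t5-small-qg`:

```python
from transformers import pipeline

# Repo id assumed from this model card.
qg = pipeline("text2text-generation", model="philipp-zettl/t5-small-qg")

result = qg(
    "context: The Hugging Face Hub is a platform with over 350k models, 75k datasets, "
    "and 150k demo apps (Spaces), all open source and publicly available.",
    max_length=85,
)
print(result[0]["generated_text"])
```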

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

This model was trained on [philipp-zettl/qg-tydiqa_squad2](https://huggingface.co/datasets/philipp-zettl/qg-tydiqa_squad2).

The training data was collected by combining [philipp-zettl/tydiqa-task_2-english](https://huggingface.co/datasets/philipp-zettl/tydiqa-task_2-english) with [nvidia/ChatQA-Training-Data#squad2.0](https://huggingface.co/datasets/nvidia/ChatQA-Training-Data/viewer/squad2.0).

From each base dataset we selected the `context` and `question` attributes of each sample, then joined them together into [philipp-zettl/qg-tydiqa_squad2](https://huggingface.co/datasets/philipp-zettl/qg-tydiqa_squad2).
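
The combination step itself is not included in the card. Below is a minimal sketch of how it could be reproduced with 🤗 Datasets; the column names and the `squad2.0` config name are taken from the links above, while the split names and the 90/10 train/test split are hypothetical choices:

```python
from datasets import load_dataset, concatenate_datasets

# Load the two source datasets (split names assumed)
tydiqa = load_dataset("philipp-zettl/tydiqa-task_2-english", split="train")
squad2 = load_dataset("nvidia/ChatQA-Training-Data", "squad2.0", split="train")

# Keep only the attributes used for question generation
columns = ["context", "question"]
combined = concatenate_datasets([
    tydiqa.select_columns(columns),
    squad2.select_columns(columns),
])

# Hypothetical 90/10 train/test split; the published dataset's actual split may differ
combined = combined.train_test_split(test_size=0.1, seed=42)
```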

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
Below you can find the full training pipeline used to produce this fine-tune.

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Base model: FLAN-T5 small
# https://huggingface.co/collections/google/flan-t5-release-65005c39e3201fff885e22fb
model_name = 'google/flan-t5-small'
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Move the model to GPU if available
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = model.to(device)
```

Load the dataset:
```python
from datasets import load_dataset

# Load the question-generation dataset
squad_dataset = load_dataset('philipp-zettl/qg-tydiqa_squad2')

# Split the dataset into training and validation
train_dataset = squad_dataset['train']
validation_dataset = squad_dataset['test']
```

Preprocessing: tokenize inputs and labels up front for faster training cycles, i.e. no tokenization is needed during training anymore:
```python
def preprocess_batch(batch, tokenizer, max_input_length=512, max_output_length=128):
    contexts = batch['context']
    questions = batch['question']

    # Prefix each context with the task prompt and tokenize
    inputs = [f"context: {c}" for c in contexts]
    model_inputs = tokenizer(inputs, max_length=max_input_length, padding=True, truncation=True)

    # The tokenized questions serve as labels
    labels = tokenizer(questions, max_length=max_output_length, padding=True, truncation=True)
    model_inputs['labels'] = labels['input_ids']

    return model_inputs

# Tokenize the dataset
train_dataset = train_dataset.map(lambda batch: preprocess_batch(batch, tokenizer), batched=True)
validation_dataset = validation_dataset.map(lambda batch: preprocess_batch(batch, tokenizer), batched=True)

# Set format for PyTorch
train_dataset.set_format(type='torch', columns=['input_ids', 'attention_mask', 'labels'])
validation_dataset.set_format(type='torch', columns=['input_ids', 'attention_mask', 'labels'])
```
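
One optional tweak, not part of the original pipeline: the labels above are padded with the tokenizer's pad token, which the cross-entropy loss will count unless it is masked (the `DataCollatorForSeq2Seq` used below only fills *additional* label positions with `-100`). A sketch of such masking, to be applied after the tokenization `map` calls and before `set_format`:

```python
# Optional (not in the original card): replace label padding with -100 so the loss ignores it
def mask_label_padding(batch):
    batch['labels'] = [
        [token if token != tokenizer.pad_token_id else -100 for token in label]
        for label in batch['labels']
    ]
    return batch

train_dataset = train_dataset.map(mask_label_padding, batched=True)
validation_dataset = validation_dataset.map(mask_label_padding, batched=True)
```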

The training loop:
```python
from tqdm import tqdm
from transformers import DataCollatorForSeq2Seq
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter

torch.cuda.empty_cache()

model.to(device)

# Training parameters
epochs = 3
learning_rate = 5e-5
batch_size = 8
optimizer = torch.optim.AdamW(model.parameters(), lr=learning_rate)

# Create a data collator for padding and batching
data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=model)

# Create DataLoaders with the data collator
train_dataloader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True, collate_fn=data_collator)
validation_dataloader = DataLoader(validation_dataset, batch_size=batch_size, collate_fn=data_collator)

writer = SummaryWriter(comment='t5-small-qg')

print("Starting training...")

# Training loop
for epoch in range(epochs):
    model.train()
    total_loss = 0
    print(f"Epoch {epoch+1}/{epochs}")

    progress_bar = tqdm(train_dataloader, desc="Training", leave=False)

    for step, batch in enumerate(progress_bar):
        input_ids = batch['input_ids'].to(device)
        attention_mask = batch['attention_mask'].to(device)
        labels = batch['labels'].to(device)

        outputs = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels)

        # Cross-entropy loss computed by the model
        loss = outputs.loss
        # Log against a global step so curves don't overlap across epochs
        writer.add_scalar("Loss/train", loss.item(), epoch * len(train_dataloader) + step)

        # Backpropagation
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        total_loss += loss.item()

        # Verbose logging
        if step % 100 == 1 or step == len(train_dataloader) - 1:
            progress_bar.set_postfix({
                'step': step,
                'loss': loss.item(),
            })

            # Generate a sample output from the model
            model.eval()
            with torch.no_grad():
                sample_output = model.generate(input_ids[:1], max_length=50)
                sample_output_text = tokenizer.decode(sample_output[0], skip_special_tokens=True)
                input_text = tokenizer.decode(input_ids[0], skip_special_tokens=True)
                writer.add_text("Sample Input", input_text, step)
                writer.add_text("Sample Output", sample_output_text, step)
            model.train()

    avg_loss = total_loss / len(train_dataloader)
    print(f"Epoch {epoch+1} completed. Average Loss: {avg_loss:.4f}")
    writer.add_scalar("AVG Loss/train", avg_loss, epoch)

print("Training complete.")
writer.close()
```
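
The loop above builds a `validation_dataloader` but never runs it. A minimal evaluation pass, an addition not shown in the original pipeline, could be appended at the end of each epoch:

```python
# Hypothetical validation pass, run once per epoch after the training steps
model.eval()
val_loss = 0.0
with torch.no_grad():
    for batch in validation_dataloader:
        outputs = model(
            input_ids=batch['input_ids'].to(device),
            attention_mask=batch['attention_mask'].to(device),
            labels=batch['labels'].to(device),
        )
        val_loss += outputs.loss.item()

avg_val_loss = val_loss / len(validation_dataloader)
print(f"Validation Loss: {avg_val_loss:.4f}")
writer.add_scalar("Loss/validation", avg_val_loss, epoch)
model.train()
```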