Update README.md
README.md CHANGED

@@ -57,7 +57,7 @@ Here's how to use the model:
 
 >> input_text = "POST: Instacart gave me 50 pounds of limes instead of 5 pounds... what the hell do I do with 50 pounds of limes? I've already donated a bunch and gave a bunch away. I'm planning on making a bunch of lime-themed cocktails, but... jeez. Ceviche? \n\n RESPONSE A: Lime juice, and zest, then freeze in small quantities.\n\n RESPONSE B: Lime marmalade lol\n\n Which response is better? RESPONSE"
 >> x = tokenizer([input_text], return_tensors='pt').input_ids.to(device)
->> y = model.generate(x)
+>> y = model.generate(x, max_new_tokens=1)
 >> tokenizer.batch_decode(y, skip_special_tokens=True)
 ['A']
 ```
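The snippet above hard-codes the prompt string, whose layout (the `POST:`, `RESPONSE A:`, `RESPONSE B:`, and trailing `Which response is better? RESPONSE` markers, with their exact spacing) is what the model is conditioned on. As a sketch, a small helper like the hypothetical `build_comparison_prompt` below (not part of the README) can assemble that same format for new post/response pairs before tokenizing:

```python
def build_comparison_prompt(post: str, response_a: str, response_b: str) -> str:
    # Hypothetical helper: reproduces the exact prompt layout used in the
    # README snippet, including the leading spaces before each RESPONSE
    # marker and the trailing "RESPONSE" cue the model completes with
    # 'A' or 'B'.
    return (
        f"POST: {post} \n\n"
        f" RESPONSE A: {response_a}\n\n"
        f" RESPONSE B: {response_b}\n\n"
        " Which response is better? RESPONSE"
    )

# Rebuild the README's example input from its parts.
prompt = build_comparison_prompt(
    "Instacart gave me 50 pounds of limes instead of 5 pounds... what the "
    "hell do I do with 50 pounds of limes? I've already donated a bunch and "
    "gave a bunch away. I'm planning on making a bunch of lime-themed "
    "cocktails, but... jeez. Ceviche?",
    "Lime juice, and zest, then freeze in small quantities.",
    "Lime marmalade lol",
)
```

Since the model answers with just `A` or `B`, capping decoding with `max_new_tokens=1` (as the change above does) is enough and avoids generating anything past the single-letter verdict.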