Tags: Text Generation · Transformers · Safetensors · llama · text-generation-inference · Inference Endpoints
mfromm committed verified commit 548ab9a · Parent(s): 238e48d

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -126,7 +126,7 @@ prediction = model.generate(
     temperature=0.7,
     num_return_sequences=1,
 )
-prediction_text = tokenizer.decode(prediction[0])
+prediction_text = tokenizer.decode(prediction[0].tolist())
 print(prediction_text)
 ```
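The change passes a plain Python list of token ids to `tokenizer.decode` instead of a framework tensor row. A minimal sketch of why this matters, using NumPy as a stand-in for the generation output tensor and a hypothetical toy vocabulary in place of a real tokenizer (both are illustrative assumptions, not part of the commit):

```python
import numpy as np

# Hypothetical id-to-token map standing in for a real tokenizer's
# vocabulary; a real tokenizer.decode handles this lookup internally.
vocab = {0: "Hello", 1: ",", 2: " world", 3: "!"}

def decode(token_ids):
    # Mimics tokenizer.decode: expects an iterable of plain Python ints.
    return "".join(vocab[int(i)] for i in token_ids)

# model.generate returns a batch of token-id sequences; a NumPy array
# stands in for the framework tensor here.
prediction = np.array([[0, 1, 2, 3]])

# .tolist() converts the tensor row into a list of native Python ints,
# the form a tokenizer's decode method reliably accepts.
prediction_text = decode(prediction[0].tolist())
print(prediction_text)  # Hello, world!
```

The same `.tolist()` pattern applies to PyTorch tensors, whose `Tensor.tolist` likewise yields nested Python lists of native scalars.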