|
--- |
|
library_name: transformers |
|
tags: [] |
|
--- |
|
|
|
## Model Details |
|
|
|
### Model Description |
|
|
|
<!-- Provide a longer summary of what this model is. --> |
|
|
|
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
|
|
|
- **Developed by:** FrankL |
|
- **Language(s) (NLP):** English |
|
|
|
|
|
### Direct Use |
|
|
|
Load the model and tokenizer with 🤗 Transformers, then generate a short story continuation:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"

model = AutoModelForCausalLM.from_pretrained(
    "FrankL/storytellerLM-v0.1",
    trust_remote_code=True,
    torch_dtype=torch.float16,
)
model = model.to(device)

tokenizer = AutoTokenizer.from_pretrained(
    "FrankL/storytellerLM-v0.1", trust_remote_code=True
)

def inference(
    model: AutoModelForCausalLM,
    tokenizer: AutoTokenizer,
    input_text: str = "Once upon a time, ",
    max_new_tokens: int = 16,
):
    # Tokenize the prompt and move it to the same device as the model
    inputs = tokenizer(input_text, return_tensors="pt").to(device)
    outputs = model.generate(
        **inputs,
        pad_token_id=tokenizer.eos_token_id,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        top_k=40,
        top_p=0.95,
        temperature=0.8,
    )
    # Decode the full sequence (prompt + continuation) into text
    generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
    print(generated_text)

inference(model, tokenizer)
```
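
The `generate` call above combines temperature scaling with top-k and top-p (nucleus) filtering. As a rough illustration of what those sampling parameters do (not the library's internal implementation), here is a minimal, self-contained sketch of one filtering step in plain PyTorch; the function name `filter_logits` is illustrative only:

```python
import torch

def filter_logits(logits: torch.Tensor, temperature: float = 0.8,
                  top_k: int = 40, top_p: float = 0.95) -> torch.Tensor:
    """Sketch of temperature + top-k + top-p filtering for one sampling step."""
    # Temperature < 1 sharpens the distribution, > 1 flattens it
    logits = logits / temperature
    # Top-k: keep only the k highest-scoring tokens
    kth = torch.topk(logits, min(top_k, logits.size(-1))).values[..., -1, None]
    logits = logits.masked_fill(logits < kth, float("-inf"))
    # Top-p (nucleus): keep the smallest set of tokens whose cumulative
    # probability exceeds top_p (the first token is always kept)
    sorted_logits, sorted_idx = torch.sort(logits, descending=True)
    probs = torch.softmax(sorted_logits, dim=-1)
    cum = probs.cumsum(dim=-1)
    remove = cum - probs > top_p  # cumulative prob *before* this token
    mask = remove.scatter(-1, sorted_idx, remove)  # undo the sort
    return logits.masked_fill(mask, float("-inf"))
```

Tokens whose logit survives (is finite) remain candidates for sampling; everything masked to `-inf` gets zero probability under the final softmax.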