Text Generation · Transformers · PyTorch · English · gpt2 · causal-lm · text-generation-inference · Inference Endpoints
mikeendale committed on
Commit 4c8cd90 · verified · 1 Parent(s): 0e4059d

Update README.md

Files changed (1)
  1. README.md +0 -33
README.md CHANGED
@@ -59,39 +59,6 @@ Related models: [Cerebras-GPT Models](https://huggingface.co/models?sort=downloa
 
  <br><br>
 
- ## Quickstart
-
- This model can be easily loaded using the AutoModelForCausalLM functionality:
- ```python
- from transformers import AutoTokenizer, AutoModelForCausalLM
-
- tokenizer = AutoTokenizer.from_pretrained("cerebras/Cerebras-GPT-111M")
- model = AutoModelForCausalLM.from_pretrained("cerebras/Cerebras-GPT-111M")
-
- text = "Generative AI is "
- ```
-
- And can be used with Hugging Face Pipelines
-
- ```python
- from transformers import pipeline
-
- pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
- generated_text = pipe(text, max_length=50, do_sample=False, no_repeat_ngram_size=2)[0]
- print(generated_text['generated_text'])
- ```
-
- or with `model.generate()`
-
- ```python
- inputs = tokenizer(text, return_tensors="pt")
- outputs = model.generate(**inputs, num_beams=5,
-                          max_new_tokens=50, early_stopping=True,
-                          no_repeat_ngram_size=2)
- text_output = tokenizer.batch_decode(outputs, skip_special_tokens=True)
- print(text_output[0])
- ```
- <br><br>
 
  ## Training data
 
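For reference, the three snippets deleted above stitch together into one self-contained script. This is only a sketch assembled from the removed lines as they appear in the diff; the model ID and generation settings are taken verbatim from them, and it assumes `transformers` and a PyTorch backend are installed:

```python
# Combined Quickstart from the removed README section:
# load the tokenizer and model, then generate with model.generate().
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("cerebras/Cerebras-GPT-111M")
model = AutoModelForCausalLM.from_pretrained("cerebras/Cerebras-GPT-111M")

text = "Generative AI is "

# Beam-search decoding with repeated-bigram blocking,
# using the same parameters as the removed snippet.
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, num_beams=5, max_new_tokens=50,
                         early_stopping=True, no_repeat_ngram_size=2)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```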