Commit eb3febd · Parent(s): 1265348 · Update README.md
README.md CHANGED
@@ -32,22 +32,14 @@ library_name: transformers
 
 ## Usage
 
-
-```
-$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/davanstrien/autotrain-testblog-64771135885
-```
-
-Or Python API:
+The easiest way to use this model locally is via the [Transformers](https://huggingface.co/docs/transformers/index) library [pipelines for inference](https://huggingface.co/docs/transformers/pipeline_tutorial).
+Once you have [installed transformers](https://huggingface.co/docs/transformers/installation), you can run the following code. This will download and cache the model locally and allow you to make predictions on text input.
 
 ```
-from transformers import AutoModelForSequenceClassification, AutoTokenizer
-
-model = AutoModelForSequenceClassification.from_pretrained("davanstrien/autotrain-testblog-64771135885", use_auth_token=True)
-
-tokenizer = AutoTokenizer.from_pretrained("davanstrien/autotrain-testblog-64771135885", use_auth_token=True)
-
-inputs = tokenizer("I love AutoTrain", return_tensors="pt")
-
+>>> from transformers import pipeline
+>>> classifier = pipeline('text-classification', "davanstrien/autotrain-beyond-the-books")
+>>> classifier(text)
+[{'label': 'no_jim_crow', 'score': 0.9718555212020874}]
 ```
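
For readers who prefer Python over curl, the removed Inference API call above can be sketched with the `requests` library. This is a sketch under assumptions: `YOUR_API_KEY` is a placeholder exactly as in the original curl command, `requests` must be installed separately, and the helper name `build_request` is hypothetical, not part of any Hugging Face API.

```python
import json

# Model URL taken from the original README's curl example;
# YOUR_API_KEY is a placeholder, as in that example.
API_URL = "https://api-inference.huggingface.co/models/davanstrien/autotrain-testblog-64771135885"

def build_request(api_key, text):
    """Build the headers and JSON body for the Inference API POST."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = json.dumps({"inputs": text})
    return headers, payload

headers, payload = build_request("YOUR_API_KEY", "I love AutoTrain")
```

Sending the request would then be `requests.post(API_URL, headers=headers, data=payload)`, which returns the predicted labels and scores as JSON.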
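
The removed `AutoModel` snippet stops after tokenization; the missing final step is a forward pass followed by a softmax over the logits. A minimal sketch of that last step in plain Python, assuming a two-label classifier and made-up logit values (in the real flow, `model(**inputs).logits` supplies them):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw scores."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up logits standing in for model(**inputs).logits[0].tolist()
logits = [2.5, -1.3]
probs = softmax(logits)
# Index of the highest-probability class; id2label in the model
# config maps it back to a label string.
predicted = max(range(len(probs)), key=probs.__getitem__)
```

This is the same computation the `pipeline` in the commit performs internally before formatting the `{'label': ..., 'score': ...}` output.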