Update README.md
README.md CHANGED
@@ -27,7 +27,7 @@ co2_eq_emissions: 0.004032656988228696
 You can use cURL to access this model:
 
 ```
-$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "Why is the username the largest part of each card?"}' https://api-inference.huggingface.co/models/Shenzy2/
+$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "Why is the username the largest part of each card?"}' https://api-inference.huggingface.co/models/Shenzy2/NER4DesignTutor
 ```
 
 Or Python API:
@@ -35,9 +35,9 @@ Or Python API:
 ```
 from transformers import AutoModelForTokenClassification, AutoTokenizer
 
-model = AutoModelForTokenClassification.from_pretrained("Shenzy2/
+model = AutoModelForTokenClassification.from_pretrained("Shenzy2/NER4DesignTutor", use_auth_token=True)
 
-tokenizer = AutoTokenizer.from_pretrained("Shenzy2/
+tokenizer = AutoTokenizer.from_pretrained("Shenzy2/NER4DesignTutor", use_auth_token=True)
 
 inputs = tokenizer("Why is the username the largest part of each card?", return_tensors="pt")
 
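For reference, the updated Python snippet can be completed into a runnable token-classification call roughly as sketched below. Everything after the `tokenizer(...)` line (the `torch.no_grad` forward pass, argmax decoding, and `id2label` lookup) is an illustrative assumption and is not part of this diff.

```python
# Minimal sketch of running the updated snippet end to end.
# The forward pass and label decoding below are assumptions for illustration;
# only the lines up to `inputs = ...` come from the README diff above.
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

model = AutoModelForTokenClassification.from_pretrained("Shenzy2/NER4DesignTutor", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Shenzy2/NER4DesignTutor", use_auth_token=True)

inputs = tokenizer("Why is the username the largest part of each card?", return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, num_labels)

# Take the highest-scoring label for each token and print token/label pairs
predicted_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for token, label_id in zip(tokens, predicted_ids):
    print(token, model.config.id2label[label_id])
```

Note that `use_auth_token=True` reads the token stored by `huggingface-cli login`; in newer `transformers` releases the argument has been renamed to `token`.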