Add evaluation results on the mrpc config and validation split of glue
Beep boop, I am a bot from Hugging Face's automatic model evaluator 👋!\
Your model has been evaluated on the mrpc config and validation split of the [glue](https://huggingface.co/datasets/glue) dataset by @lewtun, using the predictions stored [here](https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-fa97c361-989b-438c-bd2b-73aa1588c214-5654).\
Accept this pull request to see the results displayed on the [Hub leaderboard](https://huggingface.co/spaces/autoevaluate/leaderboards?dataset=glue).\
Evaluate your model on more datasets [here](https://huggingface.co/spaces/autoevaluate/model-evaluator?dataset=glue).
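
For reference, below is a minimal sketch of how a comparable evaluation on the mrpc validation split could be reproduced locally with `datasets`, `transformers`, and `evaluate`. The model id is a placeholder (the checkpoint under review is not named here), and the automatic evaluator's own pipeline may differ in details such as batching and label mapping.

```python
# Minimal sketch, assuming a text-classification checkpoint on the Hub.
# "your-username/your-mrpc-model" is a placeholder, not the model in this PR.
from datasets import load_dataset
from transformers import pipeline
import evaluate

model_id = "your-username/your-mrpc-model"  # placeholder

# GLUE MRPC validation split: sentence pairs labelled paraphrase / not paraphrase.
mrpc = load_dataset("glue", "mrpc", split="validation")

classifier = pipeline("text-classification", model=model_id)

# MRPC inputs are sentence pairs, passed as {"text", "text_pair"} dicts.
pairs = [
    {"text": s1, "text_pair": s2}
    for s1, s2 in zip(mrpc["sentence1"], mrpc["sentence2"])
]
outputs = classifier(pairs)

# Assumes generic "LABEL_0"/"LABEL_1" names; adapt if the checkpoint defines id2label.
predictions = [int(o["label"].split("_")[-1]) for o in outputs]

# The glue/mrpc metric reports both accuracy and F1, matching the metrics added to the card.
metric = evaluate.load("glue", "mrpc")
print(metric.compute(predictions=predictions, references=mrpc["label"]))
```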
README.md CHANGED

@@ -11,8 +11,8 @@ model-index:
 - name: natural-language-inference
   results:
   - task:
-      name: Text Classification
       type: text-classification
+      name: Text Classification
     dataset:
       name: glue
       type: glue
@@ -20,12 +20,12 @@ model-index:
       split: train
       args: mrpc
     metrics:
-    - name: Accuracy
-      type: accuracy
+    - type: accuracy
       value: 0.8284313725490197
-    - name: F1
-      type: f1
+      name: Accuracy
+    - type: f1
       value: 0.8821548821548822
+      name: F1
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
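
The diff itself only reorders the existing `task` and `metrics` entries so that each metric carries an explicit `type`, `value`, and `name`. As a rough illustration (not the evaluator bot's actual code), the same `model-index` structure can be written programmatically with `huggingface_hub.metadata_update`; the repo id below is a placeholder.

```python
# Rough sketch of writing the same model-index metadata programmatically.
# Placeholder repo id; this is an illustration, not the evaluator's own code.
from huggingface_hub import metadata_update

metadata = {
    "model-index": [
        {
            "name": "natural-language-inference",
            "results": [
                {
                    "task": {"type": "text-classification", "name": "Text Classification"},
                    "dataset": {"name": "glue", "type": "glue", "split": "train", "args": "mrpc"},
                    "metrics": [
                        {"type": "accuracy", "value": 0.8284313725490197, "name": "Accuracy"},
                        {"type": "f1", "value": 0.8821548821548822, "name": "F1"},
                    ],
                }
            ],
        }
    ]
}

# Edits the YAML front matter of the repo's README.md; overwrite=True allows
# existing model-index entries to be replaced rather than only appended to.
metadata_update("your-username/your-mrpc-model", metadata, overwrite=True)
```

The updated YAML front matter produced this way is the same block shown in the diff above, which is what the Hub leaderboard reads when displaying the evaluation results.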