model update
README.md CHANGED
@@ -17,7 +17,7 @@ widget:
   - text: "Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records."
     example_title: "Questions & Answers Generation Example 1"
 model-index:
-- name:
+- name: research-backup/t5-base-tweetqa-qag-np
   results:
   - task:
       name: Text2text Generation
@@ -62,7 +62,7 @@ model-index:
       value: 65.68
 ---
 
-# Model Card of `
+# Model Card of `research-backup/t5-base-tweetqa-qag-np`
 This model is fine-tuned version of [t5-base](https://huggingface.co/t5-base) for question & answer pair generation task on the [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
 This model is fine-tuned without a task prefix.
 
@@ -80,7 +80,7 @@ This model is fine-tuned without a task prefix.
 from lmqg import TransformersQG
 
 # initialize model
-model = TransformersQG(language="en", model="
+model = TransformersQG(language="en", model="research-backup/t5-base-tweetqa-qag-np")
 
 # model prediction
 question_answer_pairs = model.generate_qa("William Turner was an English painter who specialised in watercolour landscapes")
@@ -91,7 +91,7 @@ question_answer_pairs = model.generate_qa("William Turner was an English painter
 ```python
 from transformers import pipeline
 
-pipe = pipeline("text2text-generation", "
+pipe = pipeline("text2text-generation", "research-backup/t5-base-tweetqa-qag-np")
 output = pipe("Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
 
 ```
@@ -99,7 +99,7 @@ output = pipe("Beyonce further expanded her acting career, starring as blues sin
 ## Evaluation
 
 
-- ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/
+- ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/research-backup/t5-base-tweetqa-qag-np/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qag_tweetqa.default.json)
 
 | | Score | Type | Dataset |
 |:--------------------------------|--------:|:--------|:---------------------------------------------------------------------|
@@ -139,7 +139,7 @@ The following hyperparameters were used during fine-tuning:
 - gradient_accumulation_steps: 2
 - label_smoothing: 0.0
 
-The full configuration can be found at [fine-tuning config file](https://huggingface.co/
+The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/t5-base-tweetqa-qag-np/raw/main/trainer_config.json).
 
 ## Citation
 ```
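Every link this commit completes is derived from the same repository ID, following the Hub's `https://huggingface.co/<repo_id>/raw/<revision>/<path>` pattern for raw files in a model repo. As a minimal sketch (the `hub_raw_url` helper is hypothetical, not part of `lmqg` or `transformers`):

```python
# Hypothetical helper illustrating how the card's asset links are assembled
# from the repository ID this commit inserts into the placeholders.
def hub_raw_url(repo_id: str, path: str, revision: str = "main") -> str:
    """Build a huggingface.co raw-file URL for a file inside a model repo."""
    return f"https://huggingface.co/{repo_id}/raw/{revision}/{path}"

repo_id = "research-backup/t5-base-tweetqa-qag-np"

# The two links the commit fills in within the card body:
config_url = hub_raw_url(repo_id, "trainer_config.json")
metric_url = hub_raw_url(
    repo_id,
    "eval/metric.first.answer.paragraph.questions_answers.lmqg_qag_tweetqa.default.json",
)

print(config_url)
# https://huggingface.co/research-backup/t5-base-tweetqa-qag-np/raw/main/trainer_config.json
```

The same `repo_id` string is what gets passed to `TransformersQG(model=...)` and `pipeline(...)` in the usage examples above, so a single rename only requires substituting one identifier throughout the card.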