Weizhe Yuan committed
Commit · 1e75677 · Parent(s): 597c744
Update README.md

README.md CHANGED
@@ -25,19 +25,19 @@ We release all models introduced in our [paper](https://arxiv.org/pdf/2206.11147
 
 | Model | Description | Recommended Application |
 | ----------- | ----------- |----------- |
-| rst-all-11b | Trained with all the signals below except signals that are used to train Gaokao models | All applications below |
-| rst-fact-retrieval-11b | Trained with the following signals: WordNet meaning, WordNet part-of-speech, WordNet synonym, WordNet antonym, wikiHow category hierarchy, Wikidata relation, Wikidata entity typing, Paperswithcode entity typing |
-| rst-summarization-11b | Trained with the following signals: DailyMail summary, Paperswithcode summary, arXiv summary, wikiHow summary | Summarization |
-| rst-temporal-reasoning-11b | Trained with the following signals: DailyMail temporal information, wikiHow procedure | Temporal reasoning |
-| **rst-information-extraction-11b** | **Trained with the following signals: Paperswithcode entity, Paperswithcode entity typing, Wikidata entity typing, Wikidata relation, Wikipedia entity** | **Named entity recognition, relation extraction**|
-| rst-intent-detection-11b | Trained with the following signals: wikiHow goal-step relation | Intent prediction |
-| rst-topic-classification-11b | Trained with the following signals: DailyMail category, arXiv category, wikiHow text category, Wikipedia section title |
-| rst-word-sense-disambiguation-11b | Trained with the following signals: WordNet meaning, WordNet part-of-speech, WordNet synonym, WordNet antonym | Word sense disambiguation, part-of-speech tagging |
-| rst-natural-language-inference-11b | Trained with the following signals: ConTRoL dataset, DREAM dataset, LogiQA dataset, RACE & RACE-C dataset, ReClor dataset, DailyMail temporal information | Natural language inference, multiple-choice question answering |
-| rst-sentiment-classification-11b | Trained with the following signals: Rotten Tomatoes sentiment, Wikipedia sentiment | Sentiment
-| rst-gaokao-rc-11b | Trained with multiple-choice QA datasets that are used to train the [T0pp](https://huggingface.co/bigscience/T0pp) model |
-| rst-gaokao-cloze-11b | Trained with manually crafted cloze datasets |
-| rst-gaokao-writing-11b | Trained with example essays from past Gaokao-English exams and grammar error correction signals | Essay writing, grammar error correction |
+| **rst-all-11b** | **Trained with all the signals below except signals that are used to train Gaokao models** | **All applications below** (specialized models are recommended first if high performance is preferred) |
+| rst-fact-retrieval-11b | Trained with the following signals: WordNet meaning, WordNet part-of-speech, WordNet synonym, WordNet antonym, wikiHow category hierarchy, Wikidata relation, Wikidata entity typing, Paperswithcode entity typing | Knowledge-intensive tasks, information extraction tasks, factual checking |
+| rst-summarization-11b | Trained with the following signals: DailyMail summary, Paperswithcode summary, arXiv summary, wikiHow summary | Summarization or other general generation tasks, meta-evaluation (e.g., BARTScore) |
+| rst-temporal-reasoning-11b | Trained with the following signals: DailyMail temporal information, wikiHow procedure | Temporal reasoning, relation extraction, event-based extraction |
+| **rst-information-extraction-11b** | **Trained with the following signals: Paperswithcode entity, Paperswithcode entity typing, Wikidata entity typing, Wikidata relation, Wikipedia entity** | **Named entity recognition, relation extraction, and other general IE tasks in the news, scientific, or other domains** |
+| rst-intent-detection-11b | Trained with the following signals: wikiHow goal-step relation | Intent prediction, event prediction |
+| rst-topic-classification-11b | Trained with the following signals: DailyMail category, arXiv category, wikiHow text category, Wikipedia section title | General text classification |
+| rst-word-sense-disambiguation-11b | Trained with the following signals: WordNet meaning, WordNet part-of-speech, WordNet synonym, WordNet antonym | Word sense disambiguation, part-of-speech tagging, general IE tasks, common sense reasoning |
+| rst-natural-language-inference-11b | Trained with the following signals: ConTRoL dataset, DREAM dataset, LogiQA dataset, RACE & RACE-C dataset, ReClor dataset, DailyMail temporal information | Natural language inference, multiple-choice question answering, reasoning |
+| rst-sentiment-classification-11b | Trained with the following signals: Rotten Tomatoes sentiment, Wikipedia sentiment | Sentiment classification, emotion classification |
+| rst-gaokao-rc-11b | Trained with multiple-choice QA datasets that are used to train the [T0pp](https://huggingface.co/bigscience/T0pp) model | General multiple-choice question answering |
+| rst-gaokao-cloze-11b | Trained with manually crafted cloze datasets | General cloze filling |
+| rst-gaokao-writing-11b | Trained with example essays from past Gaokao-English exams and grammar error correction signals | Essay writing, story generation, grammar error correction, and other text generation tasks |
 
 
 
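The updated "Recommended Application" column amounts to a simple dispatch rule: use the specialized checkpoint when one matches the task, and fall back to the general rst-all-11b otherwise. A minimal sketch of that rule (model names are taken verbatim from the table; the task keys and the `pick_model` helper are illustrative, and the commented loading snippet assumes the `transformers` library and an organization prefix that should be checked against the model cards):

```python
# Checkpoint selection based on the "Recommended Application" column above.
# Task keys are illustrative; model names come from the table.
RECOMMENDED = {
    "summarization": "rst-summarization-11b",
    "named entity recognition": "rst-information-extraction-11b",
    "relation extraction": "rst-information-extraction-11b",
    "sentiment classification": "rst-sentiment-classification-11b",
    "natural language inference": "rst-natural-language-inference-11b",
    "word sense disambiguation": "rst-word-sense-disambiguation-11b",
    "intent prediction": "rst-intent-detection-11b",
    "temporal reasoning": "rst-temporal-reasoning-11b",
}

def pick_model(task: str) -> str:
    """Return the specialized checkpoint for a task, else the general rst-all-11b."""
    return RECOMMENDED.get(task.strip().lower(), "rst-all-11b")

# Loading then follows the standard transformers seq2seq pattern; the
# organization prefix below is an assumption -- check the model card:
#
#   from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
#   name = pick_model("summarization")
#   tokenizer = AutoTokenizer.from_pretrained(f"<org>/{name}")
#   model = AutoModelForSeq2SeqLM.from_pretrained(f"<org>/{name}")
```

Note the table's own caveat on rst-all-11b: it covers all applications, but the specialized models are recommended first when high performance is the priority.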