---
dataset_info:
features:
- name: id
dtype: string
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: answer
dtype: string
- name: dataset
dtype: string
- name: translated
dtype: bool
- name: input
dtype: string
- name: output
dtype: string
- name: conversations
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 864378584
num_examples: 923646
- name: validation
num_bytes: 25124637
num_examples: 23541
- name: test
num_bytes: 23873116
num_examples: 23023
download_size: 285523301
dataset_size: 913376337
configs:
- config_name: en
data_files:
- split: train
path: en/train-*
- split: validation
path: en/validation-*
- split: test
path: en/test-*
- config_name: cs
data_files:
- split: train
path: cs/train-*
- split: validation
path: cs/validation-*
- split: test
path: cs/test-*
license: cc-by-nc-sa-4.0
task_categories:
- text-generation
- text-classification
language:
- cs
- en
tags:
- NLI
size_categories:
- 1M<n<10M
source_datasets:
- ctu-aic/enfever_nli
- facebook/anli
- stanfordnlp/snli
- chenxwh/AVeriTeC
- ctu-aic/anli_cs
- ctu-aic/snli_cs
- ctu-aic/csfever_nli
- ctu-aic/ctkfacts_nli
multilinguality:
- multilingual
---
# Dataset Card for Natural Language Inference Instruction Tuning Collection
<!-- Provide a quick summary of the dataset. -->
This dataset is a collection of various Czech and English NLI datasets, transformed into an instruction-tuning format based on the FLAN approach.
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
This dataset is a collection of English and Czech NLI datasets. Its primary purpose is instruction tuning (supervised fine-tuning) of decoder LLMs. The source datasets were converted using FLAN-like templates.
- **Curated by:** Artificial Intelligence Center, FEE, CTU in Prague
- **Language(s) (NLP):** Czech (cs, ces), English (en)
- **License:** [cc-by-nc-sa-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/deed)
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
The dataset consists of the following datasets:
**English** 🇺🇸 🇬🇧
- [FEVER](https://huggingface.co/datasets/ctu-aic/enfever_nli) - FEVER transformed into the NLI format
- [AVeriTeC](https://huggingface.co/datasets/chenxwh/AVeriTeC) - train and development gold splits, with concatenated question-answer pairs used as the evidence
- [SNLI](https://huggingface.co/datasets/stanfordnlp/snli)
- [ANLI](https://huggingface.co/datasets/facebook/anli)
**Czech** 🇨🇿
- [CsFEVER-NLI](https://huggingface.co/datasets/ctu-aic/csfever_nli) - FEVER translated to Czech using the DeepL translator
- [CTKFacts-NLI](https://huggingface.co/datasets/ctu-aic/ctkfacts_nli) - an original Czech NLI dataset
- [SNLI_CS](https://huggingface.co/datasets/ctu-aic/snli_cs) - SNLI translated to Czech using Google Translate
- [ANLI_CS](https://huggingface.co/datasets/ctu-aic/anli_cs) - ANLI translated to Czech
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
The dataset is intended for usage similar to the original FLAN dataset. Its main purpose is instruction tuning (supervised fine-tuning) of decoder LLMs on the NLI task.
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
For direct usage, there is the **conversations** column, which can be used for training with Hugging Face Transformers and Transformers-compatible libraries without further preprocessing.
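Below is a minimal usage sketch. The repository id is a placeholder (substitute the actual Hub id of this dataset), and the tokenizer is an arbitrary chat-template-aware example:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Placeholder repository id -- substitute the actual Hub id of this dataset.
# Configs: "en" or "cs"; splits: "train", "validation", "test".
dataset = load_dataset("ctu-aic/nli-instruct-collection", "en", split="train")

# Any chat-template-aware tokenizer works; this model id is only an example.
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

# `conversations` is a list of {"role": ..., "content": ...} dicts, so the
# tokenizer's chat template can be applied to it directly.
text = tokenizer.apply_chat_template(dataset[0]["conversations"], tokenize=False)
print(text)
```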
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
This collection is not directly intended for training encoder (classification) models; however, it can be transformed for that purpose as well, as sketched below.
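As an illustration, the following sketch converts the collection into a sequence-classification format. The repository id is again a placeholder, and the label vocabulary is derived from whatever strings the `answer` column actually contains:

```python
from datasets import load_dataset

# Placeholder repository id -- substitute the actual Hub id of this dataset.
dataset = load_dataset("ctu-aic/nli-instruct-collection", "en", split="train")

# Build the label vocabulary from the answer strings present in the split.
label_names = sorted(set(dataset["answer"]))
label2id = {name: idx for idx, name in enumerate(label_names)}

def to_classification(row):
    # Pair premise and hypothesis as classifier input; encode the answer
    # string as an integer class id.
    return {
        "text": f"{row['premise']} [SEP] {row['hypothesis']}",
        "label": label2id[row["answer"]],
    }

clf_dataset = dataset.map(to_classification, remove_columns=dataset.column_names)
```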
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
The dataset consists of two language subsets: cs (Czech) and en (English).
Each subset contains the following columns:
- **id** (str) - identifier, unique only within the dataset of origin
- **premise** (str) - premise (NLI), evidence (fact-checking)
- **hypothesis** (str) - hypothesis (NLI), claim (fact-checking)
- **answer** (str) - correct answer to the NLI/fact-checking question
- **dataset** (str) - the original dataset the data point comes from
- **translated** (bool) - true if the data point was translated from another language
- **input** (str) - input created from *premise*, *hypothesis*, and *answer* using a FLAN-like template
- **output** (str) - expected output created according to the randomly chosen FLAN-like template
- **conversations** (List[Dict[str, str]]) - Hugging Face Transformers-compatible conversation-style format, composed from *input* and *output*, which can be used directly for instruction tuning (an LLM chat template can be applied directly); see the illustrative row below
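An illustrative row is shown below; all field values are invented for illustration and do not come from the actual data:

```python
# Every value below is invented for illustration only.
example_row = {
    "id": "snli-12345",
    "premise": "A man is playing a guitar on stage.",
    "hypothesis": "A musician is performing.",
    "answer": "entailment",
    "dataset": "snli",
    "translated": False,
    "input": "Premise: A man is playing a guitar on stage. "
             "Hypothesis: A musician is performing. "
             "Does the premise entail the hypothesis?",
    "output": "entailment",
    "conversations": [
        {"role": "user",
         "content": "Premise: A man is playing a guitar on stage. "
                    "Hypothesis: A musician is performing. "
                    "Does the premise entail the hypothesis?"},
        {"role": "assistant", "content": "entailment"},
    ],
}
```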
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
The creation was motivated by the absence of any similar collection with FLAN-like instructions for the Czech language.
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
For details on the original data sources, please refer to the respective source datasets.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This collection inherits the biases, risks, and limitations of the underlying datasets. Another limitation is that the variety of prompt templates is limited to 10 per dataset.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset.
## Citation [TBD]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
## Dataset Card Contact
If you have any problems or questions, please use the dataset discussion page here on Hugging Face.