---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: premise
    dtype: string
  - name: hypothesis
    dtype: string
  - name: answer
    dtype: string
  - name: dataset
    dtype: string
  - name: translated
    dtype: bool
  - name: input
    dtype: string
  - name: output
    dtype: string
  - name: conversations
    list:
    - name: content
      dtype: string
    - name: role
      dtype: string
  splits:
  - name: train
    num_bytes: 864378584
    num_examples: 923646
  - name: validation
    num_bytes: 25124637
    num_examples: 23541
  - name: test
    num_bytes: 23873116
    num_examples: 23023
  download_size: 285523301
  dataset_size: 913376337
configs:
- config_name: en
  data_files:
  - split: train
    path: en/train-*
  - split: validation
    path: en/validation-*
  - split: test
    path: en/test-*
- config_name: cs
  data_files:
  - split: train
    path: cs/train-*
  - split: validation
    path: cs/validation-*
  - split: test
    path: cs/test-*
license: cc-by-nc-sa-4.0
task_categories:
- text-generation
- text-classification
language:
- cs
- en
tags:
- NLI
size_categories:
- 1M<n<10M
source_datasets:
- ctu-aic/enfever_nli
- facebook/anli
- stanfordnlp/snli
- chenxwh/AVeriTeC
- ctu-aic/anli_cs
- ctu-aic/snli_cs
- ctu-aic/csfever_nli
- ctu-aic/ctkfacts_nli
multilinguality:
- multilingual
---
# Dataset Card for Natural Language Inference Instruction Tuning Collection
<!-- Provide a quick summary of the dataset. -->
This dataset is a collection of various Czech and English NLI datasets, transformed into an instruction-tuning format based on the FLAN approach.
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
This dataset is a collection of English and Czech NLI datasets. Its primary purpose is instruction tuning (supervised fine-tuning) of decoder LLMs. The source datasets were converted using FLAN-like templates.
- **Curated by:** Artificial Intelligence Center, FEE, CTU in Prague
- **Language(s) (NLP):** Czech (cs, ces), English (en)
- **License:** [cc-by-nc-sa-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/deed)
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
The dataset consists of the following datasets:
**English** 🇺🇸 🇬🇧
- [FEVER](https://huggingface.co/datasets/ctu-aic/enfever_nli) - FEVER transformed for NLI
- [AVeriTeC](https://huggingface.co/chenxwh/AVeriTeC) - train and development gold splits with concatenated question-answer pairs as the evidence
- [SNLI](https://huggingface.co/datasets/stanfordnlp/snli)
- [ANLI](https://huggingface.co/datasets/facebook/anli)
**Czech** 🇨🇿
- [CsFEVER-NLI](https://huggingface.co/datasets/ctu-aic/csfever_nli) - FEVER translated to Czech using the DeepL translator
- [CtkFACTS-NLI](https://huggingface.co/datasets/ctu-aic/ctkfacts_nli) - an original Czech NLI dataset
- [SNLI_CS](https://huggingface.co/datasets/ctu-aic/snli_cs) - SNLI translated to Czech using Google Translate
- [ANLI_CS](https://huggingface.co/datasets/ctu-aic/anli_cs) - ANLI translated to Czech
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
The dataset is intended for similar usage as the original FLAN collection. Its main purpose is instruction tuning (supervised fine-tuning) of decoder LLMs on the NLI task.
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
For direct usage, the *conversations* column can be used for training with Hugging Face Transformers and related libraries.
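For example, a minimal sketch of loading one subset and rendering a single conversation with a chat template; the repository id (`ctu-aic/nli_it_collection`) and the tokenizer below are assumptions, not confirmed by this card:

```python
# A minimal sketch, not an official recipe: the dataset repository id and the
# tokenizer are assumptions -- substitute the actual repo id of this dataset
# and whichever chat model you plan to fine-tune.
from datasets import load_dataset
from transformers import AutoTokenizer

# "cs" and "en" are the two configs defined in this card's metadata.
train = load_dataset("ctu-aic/nli_it_collection", "cs", split="train")

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")

# Each row's `conversations` field is a list of {"role", "content"} dicts,
# so the tokenizer's chat template can be applied to it directly.
rendered = tokenizer.apply_chat_template(train[0]["conversations"], tokenize=False)
print(rendered)
```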
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
This collection is not directly intended for training encoder (classification) models; however, it can be transformed for this purpose as well, since the raw *premise*, *hypothesis*, and *answer* columns are preserved (see the sketch below).
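A hypothetical sketch of such a transformation, mapping the textual *answer* to integer labels for sequence classification; the label strings here are assumptions and should be checked against the actual values of the `answer` column:

```python
# A hypothetical transformation into a sequence-classification format. The
# label strings below are assumptions -- inspect the actual values of the
# `answer` column first, as they may differ across the source datasets.
from datasets import load_dataset

LABELS = {"entailment": 0, "neutral": 1, "contradiction": 2}

def to_classification(example):
    # Keep premise/hypothesis as a text pair and map the answer to a class id.
    return {"label": LABELS[example["answer"].lower()]}

train = load_dataset("ctu-aic/nli_it_collection", "en", split="train")
clf_train = train.map(to_classification)
```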
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
The dataset consists of two language subsets: cs (Czech) and en (English).
Each subset contains the following columns (an illustrative row is sketched after the list):
- **id** (str) - identifier, unique only in the dataset of origin
- **premise** (str) - premise (NLI), evidence (fact-checking)
- **hypothesis** (str) - hypothesis (NLI), claim (fact-checking)
- **answer** (str) - correct answer to the NLI/fact-checking question
- **dataset** (str) - the original dataset from which the data point comes
- **translated** (bool) - true if the data point was translated from another language
- **input** (str) - input created using a FLAN-like template from *premise*, *hypothesis*, and *answer*
- **output** (str) - expected output created according to the randomly chosen FLAN-like template
- **conversations** (List[Dict[str, str]]) - Hugging Face Transformers-compatible conversation-style format composed from *input* and *output*, which can be used directly for instruction tuning (an LLM chat template can be applied directly)
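To make the schema concrete, an entirely hypothetical row is sketched below; every value is invented for illustration and is not taken from the dataset:

```python
# An entirely hypothetical example row: every value below is invented to
# illustrate the schema, not copied from the dataset.
row = {
    "id": "snli-12345",
    "premise": "A man is playing a guitar on stage.",
    "hypothesis": "Someone is performing music.",
    "answer": "entailment",
    "dataset": "snli",
    "translated": False,
    "input": "Premise: A man is playing a guitar on stage.\nHypothesis: Someone is performing music.\nDoes the premise entail the hypothesis?",
    "output": "Yes, the premise entails the hypothesis.",
    "conversations": [
        {"role": "user", "content": "Premise: A man is playing a guitar on stage.\nHypothesis: Someone is performing music.\nDoes the premise entail the hypothesis?"},
        {"role": "assistant", "content": "Yes, the premise entails the hypothesis."},
    ],
}
```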
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
The creation was motivated by the absence of any similar collection of FLAN-like instructions for the Czech language.
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
For details on the original data sources, please refer to the individual datasets listed above.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This collection inherits the biases, risks, and limitations of the underlying datasets. A further limitation is that the variety of prompt templates is limited to 10 per dataset.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset.
## Citation [TBD]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
## Dataset Card Contact
If there is any problem or question, please use the dataset discussions here on Hugging Face.