---
dataset_info:
  features:
  - name: anchor
    dtype: string
  - name: positive
    dtype: string
  - name: negative
    dtype: string
  - name: anchor_translated
    dtype: string
  - name: positive_translated
    dtype: string
  - name: negative_translated
    dtype: string
  splits:
  - name: train
    num_bytes: 92668231
    num_examples: 277386
  - name: test
    num_bytes: 2815453
    num_examples: 6609
  - name: dev
    num_bytes: 2669220
    num_examples: 6584
  download_size: 26042530
  dataset_size: 98152904
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
  - split: dev
    path: data/dev-*
task_categories:
- translation
- feature-extraction
language:
- en
- tr
tags:
- turkish
size_categories:
- 100K<n<1M
---

# all-nli-triplets-turkish

This dataset is a bilingual (English and Turkish) version of the [`sentence-transformers/all-nli`](https://huggingface.co/datasets/sentence-transformers/all-nli) dataset. It provides triplets of sentences in both English and their corresponding Turkish translations, making it suitable for training and evaluating multilingual and Turkish-specific Natural Language Understanding (NLU) models.

Each triplet consists of:

- An **anchor** sentence.
- A **positive** sentence (semantically similar to the anchor).
- A **negative** sentence (semantically dissimilar to the anchor).

The dataset enables tasks such as **Natural Language Inference (NLI)**, **semantic similarity**, and **multilingual sentence embedding**.

## Languages

- **English** (Original)
- **Turkish** (ISO 639-1: `tr`, Translated)

## Dataset Structure

The dataset contains six columns:

| Column                | Description                                             |
| --------------------- | ------------------------------------------------------- |
| `anchor`              | Anchor sentence in English.                             |
| `positive`            | Positive sentence in English (semantically similar).    |
| `negative`            | Negative sentence in English (semantically dissimilar). |
| `anchor_translated`   | Anchor sentence translated to Turkish.                  |
| `positive_translated` | Positive sentence translated to Turkish.                |
| `negative_translated` | Negative sentence translated to Turkish.                |

The dataset is divided into three splits:

- **Train**
- **Test**
- **Dev**

### Example Row

```json
{
  "anchor": "Moreover, these excise taxes, like other taxes, are determined through the exercise of the power of the Government to compel payment.",
  "positive": "Government's ability to force payment is how excise taxes are calculated.",
  "negative": "Excise taxes are an exception to the general rule and are actually decided on the basis of GDP share.",
  "anchor_translated": "Ayrıca, bu özel tüketim vergileri, diğer vergiler gibi, hükümetin ödeme zorunluluğunu sağlama yetkisini kullanarak belirlenir.",
  "positive_translated": "Hükümetin ödeme zorlaması, özel tüketim vergilerinin nasıl hesaplandığını belirler.",
  "negative_translated": "Özel tüketim vergileri genel kuralın bir istisnasıdır ve aslında GSYİH payına dayalı olarak belirlenir."
}
```

## Dataset Creation

### Source

This dataset is based on the [sentence-transformers/all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) dataset. The English triplets were directly translated into Turkish to provide a bilingual resource.

### Translation Process

- Sentences were translated using a state-of-the-art machine translation model (an illustrative sketch follows this list).
- Quality checks were performed to ensure semantic consistency between the English and Turkish triplets.
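The card does not name the translation model that was used. Purely as an illustration of what such a pass could look like, the sketch below runs the English columns through a Hugging Face translation pipeline; the model `Helsinki-NLP/opus-mt-tc-big-en-tr` and the helper `translate_batch` are assumptions for this example, not the actual tooling behind the dataset.

```python
from transformers import pipeline

# Illustrative en->tr model; the model actually used here is not documented.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-tr")

def translate_batch(batch):
    # Fill each *_translated column from its English counterpart.
    for col in ["anchor", "positive", "negative"]:
        outputs = translator(batch[col])
        batch[f"{col}_translated"] = [o["translation_text"] for o in outputs]
    return batch

# Usage with a `datasets.Dataset` holding the English columns:
# english_dataset = english_dataset.map(translate_batch, batched=True, batch_size=32)
```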

### Motivation

This bilingual dataset was created to address the lack of Turkish resources in Natural Language Processing (NLP). It aims to support tasks such as multilingual sentence embedding, semantic similarity, and Turkish NLU.

## Supported Tasks and Benchmarks

### Primary Tasks

- **Natural Language Inference (NLI)**: Train models to understand sentence relationships.
- **Semantic Similarity**: Train and evaluate models on semantic similarity across languages.
- **Multilingual Sentence Embedding**: Create models that understand multiple languages (see the training sketch after this list).
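For the embedding use case in particular, the triplets plug directly into a triplet-style contrastive objective. The sketch below is a minimal example, assuming `sentence-transformers` v3+ and `paraphrase-multilingual-MiniLM-L12-v2` as an illustrative base model; neither choice is prescribed by the dataset.

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

# Illustrative multilingual base model (an assumption, not a recommendation).
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

train = load_dataset("mertcobanov/all-nli-triplets-turkish", split="train")

# The trainer feeds loss inputs by column order: (anchor, positive, negative).
# Here we train on the Turkish side; use the English columns for a bilingual mix.
train = train.select_columns(
    ["anchor_translated", "positive_translated", "negative_translated"]
)

trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset=train,
    loss=MultipleNegativesRankingLoss(model),
)
trainer.train()
```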

## Dataset Details

| Split | Size (Triplets) | Description             |
| ----- | --------------- | ----------------------- |
| Train | 277,386         | Triplets for training   |
| Test  | 6,609           | Triplets for testing    |
| Dev   | 6,584           | Triplets for validation |

## How to Use

Here’s an example of how to load and explore the dataset:

```python
from datasets import load_dataset

# Load dataset
dataset = load_dataset("mertcobanov/all-nli-triplets-turkish")

# Access the train split
train_data = dataset["train"]

# Example row
print(train_data[0])
```
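Because every row pairs each English sentence with its Turkish translation, the dataset is also convenient for quick cross-lingual spot checks. Continuing from the snippet above, a minimal sketch using an illustrative multilingual encoder:

```python
from sentence_transformers import SentenceTransformer, util

# Illustrative multilingual model; any cross-lingual encoder works here.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

row = train_data[0]
embeddings = model.encode([row["anchor"], row["anchor_translated"]])

# Cosine similarity between the English anchor and its Turkish translation.
print(util.cos_sim(embeddings[0], embeddings[1]))
```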

## Caveats and Recommendations

1. The translations were generated using machine translation, and while quality checks were performed, there may still be minor inaccuracies.
2. For multilingual tasks, ensure the alignment between English and Turkish triplets is preserved during preprocessing (a simple check is sketched below).
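A lightweight way to enforce point 2, sketched here as a hypothetical sanity check rather than an official validation step, reuses `train_data` from the snippet above and verifies that every row still carries all six fields after preprocessing:

```python
columns = [
    "anchor", "positive", "negative",
    "anchor_translated", "positive_translated", "negative_translated",
]

# Rows with a missing or blank field would signal broken EN/TR alignment.
broken = train_data.filter(
    lambda row: any(not (row[c] and row[c].strip()) for c in columns)
)
print(f"Rows with missing or empty fields: {len(broken)}")
```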

## Citation

If you use this dataset in your research, please cite the original dataset and this translation effort:

```
@inproceedings{reimers-gurevych-2019-sentence,
  title = {Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks},
  author = {Reimers, Nils and Gurevych, Iryna},
  booktitle = {Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP-IJCNLP)},
  year = {2019},
  url = {https://arxiv.org/abs/1908.10084}
}

@misc{mertcobanov_2024,
  author = {Mert Cobanov},
  title = {Turkish-English Bilingual Dataset for NLI Triplets},
  year = {2024},
  url = {https://huggingface.co/datasets/mertcobanov/all-nli-triplets-turkish}
}
```

## License

This dataset follows the licensing terms of the original `sentence-transformers/all-nli` dataset. Ensure compliance with these terms if you use this dataset.
