    path: data/test-*
  - split: dev
    path: data/dev-*
task_categories:
- translation
- feature-extraction
language:
- en
- tr
tags:
- turkish
size_categories:
- 100K<n<1M
---
48 |
+
|
49 |
+
# all-nli-triplets-turkish
|
50 |
+
|
51 |
+
This dataset is a bilingual (English and Turkish) version of the [`sentence-transformers/all-nli`](https://huggingface.co/datasets/sentence-transformers/all-nli) dataset. It provides triplets of sentences in both English and their corresponding Turkish translations, making it suitable for training and evaluating multilingual and Turkish-specific Natural Language Understanding (NLU) models.

Each triplet consists of:

- An **anchor** sentence.
- A **positive** sentence (semantically similar to the anchor).
- A **negative** sentence (semantically dissimilar to the anchor).

The dataset enables tasks such as **Natural Language Inference (NLI)**, **semantic similarity**, and **multilingual sentence embedding**.
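
As one illustration (a minimal sketch, assuming the `sentence-transformers` v3 training API; the multilingual base model is an arbitrary choice, not something the dataset prescribes), the Turkish side of the triplets can be fed to a triplet-style contrastive loss:

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

# Illustrative multilingual base model (any sentence-transformers checkpoint works)
model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

# Keep only the Turkish triplet columns and rename them to the
# (anchor, positive, negative) layout the loss expects
train = (
    load_dataset("mertcobanov/all-nli-triplets-turkish", split="train")
    .select_columns(["anchor_translated", "positive_translated", "negative_translated"])
    .rename_columns({
        "anchor_translated": "anchor",
        "positive_translated": "positive",
        "negative_translated": "negative",
    })
)

loss = MultipleNegativesRankingLoss(model)
trainer = SentenceTransformerTrainer(model=model, train_dataset=train, loss=loss)
trainer.train()
```

`MultipleNegativesRankingLoss` scores each anchor against its positive and all in-batch negatives, which is the usual recipe for (anchor, positive, negative) data.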

## Languages

- **English** (Original)
- **Turkish** (ISO 639-1: `tr`, Translated)

## Dataset Structure

The dataset contains six columns:

| Column                | Description                                             |
| --------------------- | ------------------------------------------------------- |
| `anchor`              | Anchor sentence in English.                             |
| `positive`            | Positive sentence in English (semantically similar).    |
| `negative`            | Negative sentence in English (semantically dissimilar). |
| `anchor_translated`   | Anchor sentence translated to Turkish.                  |
| `positive_translated` | Positive sentence translated to Turkish.                |
| `negative_translated` | Negative sentence translated to Turkish.                |

The dataset is divided into three splits:

- **Train**
- **Test**
- **Dev**

### Example Row

```json
{
  "anchor": "Moreover, these excise taxes, like other taxes, are determined through the exercise of the power of the Government to compel payment.",
  "positive": "Government's ability to force payment is how excise taxes are calculated.",
  "negative": "Excise taxes are an exception to the general rule and are actually decided on the basis of GDP share.",
  "anchor_translated": "Ayrıca, bu özel tüketim vergileri, diğer vergiler gibi, hükümetin ödeme zorunluluğunu sağlama yetkisini kullanarak belirlenir.",
  "positive_translated": "Hükümetin ödeme zorlaması, özel tüketim vergilerinin nasıl hesaplandığını belirler.",
  "negative_translated": "Özel tüketim vergileri genel kuralın bir istisnasıdır ve aslında GSYİH payına dayalı olarak belirlenir."
}
```

## Dataset Creation

### Source

This dataset is based on the [sentence-transformers/all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) dataset. The English triplets were directly translated into Turkish to provide a bilingual resource.

### Translation Process

- Sentences were translated using a state-of-the-art machine translation model.
- Quality checks were performed to ensure semantic consistency between the English and Turkish triplets.
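
The card does not name the translation model, so the following is purely illustrative of what an English-to-Turkish pass can look like (it assumes the `transformers` library and the open `Helsinki-NLP/opus-mt-tc-big-en-tr` checkpoint, which is not necessarily the model used here):

```python
from transformers import pipeline

# Illustrative open EN->TR model; NOT necessarily the model used for this dataset
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-tr")

sentences = ["A person on a horse jumps over a broken down airplane."]
print(translator(sentences)[0]["translation_text"])
```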

### Motivation

This bilingual dataset was created to address the lack of Turkish resources in Natural Language Processing (NLP). It aims to support tasks such as multilingual sentence embedding, semantic similarity, and Turkish NLU.

## Supported Tasks and Benchmarks

### Primary Tasks

- **Natural Language Inference (NLI)**: Train models to understand sentence relationships.
- **Semantic Similarity**: Train and evaluate models on semantic similarity across languages.
- **Multilingual Sentence Embedding**: Create models that understand multiple languages (a quick cross-lingual check is sketched below).
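
As a sketch of that check (assuming `sentence-transformers` and an arbitrary multilingual checkpoint), the cosine similarity between each English sentence and its Turkish translation can be measured directly:

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")
rows = load_dataset("mertcobanov/all-nli-triplets-turkish", split="dev").select(range(8))

# Encode English anchors and their Turkish translations into one shared space
en = model.encode(rows["anchor"], normalize_embeddings=True)
tr = model.encode(rows["anchor_translated"], normalize_embeddings=True)

# With normalized embeddings, the row-wise dot product is the cosine similarity
for i, score in enumerate((en * tr).sum(axis=1)):
    print(f"pair {i}: cosine similarity = {score:.3f}")
```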

## Dataset Details

| Split | Size (Triplets) | Description             |
| ----- | --------------- | ----------------------- |
| Train | 558k            | Triplets for training   |
| Test  | 6.61k           | Triplets for testing    |
| Dev   | 6.58k           | Triplets for validation |

## How to Use

Here’s an example of how to load and explore the dataset:

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("mertcobanov/all-nli-triplets-turkish")

# Access the train split
train_data = dataset["train"]

# Example row
print(train_data[0])
```

## Caveats and Recommendations

1. The translations were generated using machine translation; while quality checks were performed, minor inaccuracies may remain.
2. For multilingual tasks, ensure the alignment between English and Turkish triplets is preserved during preprocessing (see the sketch below).
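
A minimal sketch of the second point, assuming the `datasets` library: operate on whole rows so the six columns stay paired, rather than shuffling or filtering the English and Turkish columns separately.

```python
from datasets import load_dataset

train = load_dataset("mertcobanov/all-nli-triplets-turkish", split="train")

# Row-level operations keep all six columns aligned
train = train.filter(lambda row: len(row["anchor"]) > 0 and len(row["anchor_translated"]) > 0)
train = train.shuffle(seed=42)

# Each row still pairs the English triplet with its Turkish translation
example = train[0]
print(example["anchor"], "->", example["anchor_translated"])
```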

## Citation

If you use this dataset in your research, please cite the original dataset and this translation effort:

```bibtex
@inproceedings{reimers-gurevych-2019-sentence,
  title     = {Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks},
  author    = {Reimers, Nils and Gurevych, Iryna},
  booktitle = {Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing},
  year      = {2019},
  url       = {https://arxiv.org/abs/1908.10084}
}

@misc{mertcobanov_2024,
  author = {Mert Cobanov},
  title  = {Turkish-English Bilingual Dataset for NLI Triplets},
  year   = {2024},
  url    = {https://huggingface.co/datasets/mertcobanov/all-nli-triplets-turkish}
}
```

## License

This dataset follows the licensing terms of the original `sentence-transformers/all-nli` dataset. Ensure compliance with these terms if you use this dataset.