---
dataset_info:
  features:
    - name: anchor
      dtype: string
    - name: positive
      dtype: string
    - name: negative
      dtype: string
    - name: anchor_translated
      dtype: string
    - name: positive_translated
      dtype: string
    - name: negative_translated
      dtype: string
  splits:
    - name: train
      num_bytes: 92668231
      num_examples: 277386
    - name: test
      num_bytes: 2815453
      num_examples: 6609
    - name: dev
      num_bytes: 2669220
      num_examples: 6584
  download_size: 26042530
  dataset_size: 98152904
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
      - split: dev
        path: data/dev-*
task_categories:
  - translation
  - feature-extraction
language:
  - en
  - tr
tags:
  - turkish
size_categories:
  - 100K<n<1M
---
# all-nli-triplets-turkish
This dataset is a bilingual (English and Turkish) version of the `sentence-transformers/all-nli` dataset. It provides sentence triplets in English together with their corresponding Turkish translations, making it suitable for training and evaluating multilingual and Turkish-specific Natural Language Understanding (NLU) models.
Each triplet consists of:
- An anchor sentence.
- A positive sentence (semantically similar to the anchor).
- A negative sentence (semantically dissimilar to the anchor).
The dataset enables tasks such as Natural Language Inference (NLI), semantic similarity, and multilingual sentence embedding.
## Languages

- English (Original)
- Turkish (ISO 639-1: `tr`, Translated)
## Dataset Structure

The dataset contains six columns:

| Column | Description |
|---|---|
| `anchor` | Anchor sentence in English. |
| `positive` | Positive sentence in English (semantically similar). |
| `negative` | Negative sentence in English (semantically dissimilar). |
| `anchor_translated` | Anchor sentence translated to Turkish. |
| `positive_translated` | Positive sentence translated to Turkish. |
| `negative_translated` | Negative sentence translated to Turkish. |
The dataset is divided into three splits:
- Train
- Test
- Dev
### Example Row

```json
{
  "anchor": "Moreover, these excise taxes, like other taxes, are determined through the exercise of the power of the Government to compel payment.",
  "positive": "Government's ability to force payment is how excise taxes are calculated.",
  "negative": "Excise taxes are an exception to the general rule and are actually decided on the basis of GDP share.",
  "anchor_translated": "Ayrıca, bu özel tüketim vergileri, diğer vergiler gibi, hükümetin ödeme zorunluluğunu sağlama yetkisini kullanarak belirlenir.",
  "positive_translated": "Hükümetin ödeme zorlaması, özel tüketim vergilerinin nasıl hesaplandığını belirler.",
  "negative_translated": "Özel tüketim vergileri genel kuralın bir istisnasıdır ve aslında GSYİH payına dayalı olarak belirlenir."
}
```
## Dataset Creation

### Source

This dataset is based on the `sentence-transformers/all-nli` dataset. The English triplets were directly translated into Turkish to provide a bilingual resource.
### Translation Process

- Sentences were translated using a state-of-the-art machine translation model.
- Quality checks were performed to ensure semantic consistency between the English and Turkish triplets.
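The card does not name the specific translation model that was used. As an illustration only, a comparable translation pass over the English columns could look like the sketch below; the open English-to-Turkish model and the batch handling are stand-in assumptions, not the actual pipeline behind this dataset.

```python
from transformers import pipeline

# Stand-in model for illustration; the model actually used to build this
# dataset is not specified in the card.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-tr")

english_sentences = [
    "A man is playing a guitar.",
    "Two children are running in the park.",
]

# Translate a small batch of English sentences into Turkish.
turkish_sentences = [out["translation_text"] for out in translator(english_sentences)]
print(turkish_sentences)
```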
### Motivation

This bilingual dataset was created to address the lack of Turkish resources in Natural Language Processing (NLP). It aims to support tasks such as multilingual sentence embedding, semantic similarity, and Turkish NLU.
## Supported Tasks and Benchmarks

### Primary Tasks
- Natural Language Inference (NLI): Train models to understand sentence relationships.
- Semantic Similarity: Train and evaluate models on semantic similarity across languages.
- Multilingual Sentence Embedding: Create models that understand multiple languages.
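
As a concrete example of the sentence-embedding use case, the Turkish columns can be fed directly into a `sentence-transformers` triplet training run. The following is a minimal sketch, not an official recipe; the base checkpoint, batch size, and output path are placeholder assumptions.

```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss

# Keep only the Turkish columns and rename them to the (anchor, positive, negative)
# layout the loss expects.
train_dataset = (
    load_dataset("mertcobanov/all-nli-triplets-turkish", split="train")
    .select_columns(["anchor_translated", "positive_translated", "negative_translated"])
    .rename_columns(
        {
            "anchor_translated": "anchor",
            "positive_translated": "positive",
            "negative_translated": "negative",
        }
    )
)

# Placeholder multilingual base model; any sentence-transformers checkpoint works here.
model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")
loss = MultipleNegativesRankingLoss(model)

args = SentenceTransformerTrainingArguments(
    output_dir="models/all-nli-turkish",  # placeholder output path
    num_train_epochs=1,
    per_device_train_batch_size=64,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```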
## Dataset Details

| Split | Size (Triplets) | Description |
|---|---|---|
| Train | 277,386 | Triplets for training |
| Test | 6,609 | Triplets for testing |
| Dev | 6,584 | Triplets for validation |
## How to Use

Here's an example of how to load and explore the dataset:

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("mertcobanov/all-nli-triplets-turkish")

# Access the train split
train_data = dataset["train"]

# Inspect the first row
print(train_data[0])
```
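
Because each row keeps an English sentence and its Turkish translation side by side, the same load also yields parallel English-Turkish pairs, which is useful for the translation and cross-lingual similarity use cases. A small sketch (the 100-row slice is just to keep the example fast):

```python
from datasets import load_dataset

train = load_dataset("mertcobanov/all-nli-triplets-turkish", split="train")

# Collect parallel English-Turkish sentence pairs from the aligned columns.
pairs = []
for row in train.select(range(100)):  # small slice, just for the example
    pairs.append((row["anchor"], row["anchor_translated"]))
    pairs.append((row["positive"], row["positive_translated"]))
    pairs.append((row["negative"], row["negative_translated"]))

print(pairs[0])
```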
## Caveats and Recommendations
- The translations were generated using machine translation, and while quality checks were performed, there may still be minor inaccuracies.
- For multilingual tasks, ensure the alignment between English and Turkish triplets is preserved during preprocessing.
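
One way to preserve that alignment is to transform or drop rows as whole units (for example with `datasets` `filter`/`map`) rather than cleaning the English and Turkish columns in separate passes. A minimal sketch, using an arbitrary completeness check as the filtering criterion:

```python
from datasets import load_dataset

dataset = load_dataset("mertcobanov/all-nli-triplets-turkish")

TEXT_COLUMNS = [
    "anchor", "positive", "negative",
    "anchor_translated", "positive_translated", "negative_translated",
]

# Drop rows only as whole units, so the English and Turkish columns stay aligned.
def is_complete(row):
    return all(row[col] and row[col].strip() for col in TEXT_COLUMNS)

dataset = dataset.filter(is_complete)  # applied to every split
```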
## Citation

If you use this dataset in your research, please cite the original dataset and this translation effort:

```bibtex
@inproceedings{sentence-transformers,
  title     = {Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks},
  author    = {Reimers, Nils and Gurevych, Iryna},
  booktitle = {Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing},
  year      = {2019},
  url       = {https://huggingface.co/datasets/sentence-transformers/all-nli}
}

@misc{mertcobanov_2024,
  author = {Mert Cobanov},
  title  = {Turkish-English Bilingual Dataset for NLI Triplets},
  year   = {2024},
  url    = {https://huggingface.co/datasets/mertcobanov/all-nli-triplets-turkish}
}
```
## License

This dataset follows the licensing terms of the original `sentence-transformers/all-nli` dataset. Ensure compliance with those terms if you use this dataset.