---
dataset_info:
  features:
  - name: INSTRUCTION
    dtype: string
  - name: RESPONSE
    dtype: string
  - name: SOURCE
    dtype: string
  - name: METADATA
    struct:
    - name: link
      dtype: string
    - name: nsfw
      dtype: bool
  splits:
  - name: train
    num_bytes: 11848430
    num_examples: 20000
  download_size: 6222319
  dataset_size: 11848430
license: mit
language:
- en
---
# Dataset Card for "oa_tell_a_joke_20000"
This dataset is based on the SocialGrep/one-million-reddit-jokes dataset and was augmented using KeyBERT for use in the [Open Assistant project](https://github.com/LAION-AI/Open-Assistant).
Additional details on how the dataset was created are available [here](https://github.com/mikegarts/Open-Assistant/blob/OA-261.tell_a_joke_dataset/data/datasets/tell_a_joke/tell_a_joke.ipynb).
# Data fields:
### INSTRUCTION - the instruction given to the assistant
### RESPONSE - the assistant's response
### SOURCE - the source of the data
### METADATA - additional metadata, such as a link to the source post on Reddit and an `nsfw` flag
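To illustrate the fields above, here is a sketch of what a single record looks like; the field values are invented for illustration and do not come from the actual dataset.

```python
# Hypothetical example record matching the dataset schema described above.
# All values are made up for illustration; only the field names and types
# follow the dataset_info block in this card.
example = {
    "INSTRUCTION": "Tell me a joke about computers.",
    "RESPONSE": "Why did the computer show up late to work? It had a hard drive.",
    "SOURCE": "one-million-reddit-jokes",
    "METADATA": {
        "link": "https://www.reddit.com/r/Jokes/comments/example",  # source post
        "nsfw": False,
    },
}
```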
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)