---
dataset_info:
  features:
  - name: INSTRUCTION
    dtype: string
  - name: RESPONSE
    dtype: string
  - name: SOURCE
    dtype: string
  - name: METADATA
    struct:
    - name: link
      dtype: string
    - name: nsfw
      dtype: bool
  splits:
  - name: train
    num_bytes: 11848430
    num_examples: 20000
  download_size: 6222319
  dataset_size: 11848430
license: mit
language:
- en
---
|
# Dataset Card for "oa_tell_a_joke_20000"

This dataset is based on the SocialGrep/one-million-reddit-jokes dataset, augmented using KeyBERT for the [Open Assistant project](https://github.com/LAION-AI/Open-Assistant).

Additional details on how the dataset was created are available [here](https://github.com/mikegarts/Open-Assistant/blob/OA-261.tell_a_joke_dataset/data/datasets/tell_a_joke/tell_a_joke.ipynb).

# Data fields

- **INSTRUCTION** - the instruction given to the assistant
- **RESPONSE** - the assistant's response
- **SOURCE** - the source of the data
- **METADATA** - a struct with a `link` field (a link to the source post on Reddit) and an `nsfw` boolean flag
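To make the schema concrete, here is a minimal sketch of what a single record looks like in plain Python; the joke text and Reddit URL below are invented for illustration, not taken from the dataset.

```python
# A hypothetical example record matching the dataset's feature schema.
# All string values here are invented for illustration only.
example = {
    "INSTRUCTION": "Tell me a joke about programmers.",
    "RESPONSE": "Why do programmers prefer dark mode? Because light attracts bugs.",
    "SOURCE": "SocialGrep/one-million-reddit-jokes",
    "METADATA": {
        "link": "https://www.reddit.com/r/Jokes/comments/example",  # link to the source post
        "nsfw": False,  # whether the source post was flagged NSFW
    },
}

# The METADATA struct carries provenance, so records can be filtered
# on the nsfw flag before use:
safe = example if not example["METADATA"]["nsfw"] else None
print(safe is not None)
```

Since `nsfw` is stored as a plain boolean, filtering the full split is a one-line dictionary check per record.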
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)