---
language:
- en
size_categories:
- 10K<n<100K
task_categories:
- conversational
pretty_name: Doctor & Patient
dataset_info:
  features:
  - name: prompt
    dtype: string
  - name: input_ids
    sequence: int32
  - name: length
    dtype: int64
  - name: attention_mask
    sequence: int8
  splits:
  - name: train
    num_bytes: 42127351.778204426
    num_examples: 13125
  - name: test
    num_bytes: 10534245.221795576
    num_examples: 3282
  download_size: 10917910
  dataset_size: 52661597
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
tags:
- biology
- medical
---
# Dataset
This is an edited and tokenized version of the MedQuad-MedicalQnADataset dataset by keivalya. The original dataset contains more than 16K question-and-answer exchanges between patients and doctors, which have been converted into full prompts for training Microsoft's BioGPT.
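The processed splits can be loaded with the `datasets` library. The repo id below is a placeholder, since this card does not state the Hub path of the dataset:

```python
from datasets import load_dataset

# Placeholder repo id; replace with the actual Hub path of this dataset.
ds = load_dataset("your-username/doctor-patient")

print(ds)                          # DatasetDict with "train" and "test" splits
print(ds["train"][0]["prompt"])    # the full formatted prompt string
```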
## Tokenizer used
`microsoft/BioGPT-Large` (BPE tokenizer)
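If you want to reproduce or extend the tokenization, the same tokenizer can be loaded from the Hub; a minimal sketch:

```python
from transformers import AutoTokenizer

# Load the BPE tokenizer used to produce the input_ids column.
tokenizer = AutoTokenizer.from_pretrained("microsoft/BioGPT-Large")

encoded = tokenizer("You are a helpful AI Doctor who answers medical questions.")
print(encoded["input_ids"])
print(tokenizer.decode(encoded["input_ids"]))
```

Note that the BioGPT tokenizer requires the `sacremoses` package to be installed.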
## Full prompt
prompt = f"""You are a helpful AI Doctor who answers medical questions. Below is a question from a patient. Your task is to answer the questions as truthfully as you can.
### Patient:
{sample['Question']}
### Doctor:
{sample['Answer']}"""
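For reference, the sketch below shows one way the original Question/Answer pairs could be formatted with this prompt and tokenized into the columns listed in the metadata (prompt, input_ids, attention_mask, length). It assumes the raw dataset exposes `Question` and `Answer` columns in a single train split; it is an illustration, not the exact preprocessing script that produced this dataset.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/BioGPT-Large")
raw = load_dataset("keivalya/MedQuad-MedicalQnADataset")

def tokenize_sample(sample):
    # Assemble the full prompt shown above from a raw Question/Answer pair.
    prompt = (
        "You are a helpful AI Doctor who answers medical questions. "
        "Below is a question from a patient. "
        "Your task is to answer the questions as truthfully as you can.\n"
        f"### Patient:\n{sample['Question']}\n"
        f"### Doctor:\n{sample['Answer']}"
    )
    tokens = tokenizer(prompt)
    return {
        "prompt": prompt,
        "input_ids": tokens["input_ids"],
        "attention_mask": tokens["attention_mask"],
        "length": len(tokens["input_ids"]),
    }

tokenized = raw["train"].map(tokenize_sample, remove_columns=raw["train"].column_names)
```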
## Notes
Since BioGPT has a maximum input length of 1024 tokens, the full prompt was truncated to stay below this limit. The truncation strategy I used ensures that only complete sentences are kept.
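The exact truncation code is not part of this card; the sketch below shows one possible sentence-boundary truncation, under the assumption that trailing sentences are dropped until the tokenized prompt fits within 1024 tokens:

```python
import re
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/BioGPT-Large")

def truncate_to_full_sentences(prompt: str, max_length: int = 1024) -> str:
    # Split on sentence-ending punctuation (keeping it attached) and drop
    # trailing sentences until the tokenized prompt fits within max_length.
    # Joining with spaces simplifies the original line breaks; sketch only.
    sentences = re.split(r"(?<=[.!?])\s+", prompt)
    while sentences:
        candidate = " ".join(sentences)
        if len(tokenizer(candidate)["input_ids"]) <= max_length:
            return candidate
        sentences.pop()
    return ""
```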
Please note that this dataset is for research and testing only; it should not be used in a real clinical setting or to give medical advice to people.