---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: audio
    struct:
    - name: array
      sequence:
        sequence: float32
    - name: path
      dtype: string
    - name: sampling_rate
      dtype: int64
  - name: sentence
    dtype: string
  splits:
  - name: train
    num_bytes: 3128740048
    num_examples: 5328
  - name: test
    num_bytes: 776455056
    num_examples: 1333
  download_size: 3882364624
  dataset_size: 3905195104
license: apache-2.0
task_categories:
- automatic-speech-recognition
language:
- en
tags:
- medical
size_categories:
- 1K<n<10K
---

8.5 hours of audio utterances paired with text for common medical symptoms.

**Content**

>This data contains thousands of audio utterances for common medical symptoms like “knee pain” or “headache,” totaling more than 8 hours in aggregate. Each utterance was created by an individual human contributor based on a given symptom. These audio snippets can be used to train conversational agents in the medical field.
>
>This Figure Eight dataset was created via a multi-job workflow. The first job involved contributors writing text phrases to describe the symptoms they were given. For example, for “headache,” a contributor might write “I need help with my migraines.” Subsequent jobs captured audio utterances for the accepted text strings.
>
>Note that some of the labels are incorrect and some of the audio files are of poor quality. I would recommend cleaning the dataset before training any machine learning models.
>
>This dataset contains both the audio utterances and the corresponding transcriptions.

**What's new**

* The data has been cleaned of all columns except file_path and phrase.
* All audio is loaded into the DatasetDict as a 1-D float32 array.
* All audio is resampled to 16 kHz.
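
**Usage**

A minimal loading sketch with the Hugging Face `datasets` library is shown below. The repository id is a placeholder (substitute the actual dataset name on the Hub); field access follows the schema declared in the YAML header above (`audio.array`, `audio.sampling_rate`, `sentence`).

```python
import numpy as np
from datasets import load_dataset

# Placeholder repository id -- replace with the actual dataset name on the Hub.
ds = load_dataset("your-username/medical-speech-symptoms")

# Each example follows the schema in the YAML header: "audio" is a struct with
# the raw float32 samples ("array"), the original file path ("path"), and the
# sampling rate ("sampling_rate", 16 kHz after resampling); "sentence" holds
# the transcription.
example = ds["train"][0]

# Flatten the stored sample list to a 1-D numpy array for downstream ASR use.
waveform = np.asarray(example["audio"]["array"], dtype=np.float32).squeeze()

print(example["sentence"])
print(example["audio"]["sampling_rate"])  # 16000
print(waveform.shape)                     # (num_samples,)
```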