---
task_categories:
- token-classification
task_ids:
- named-entity-recognition
language:
- en
size_categories:
- 100K<n<1M
dataset_info:
  features:
  - name: text
    dtype: string
  - name: tokens
    sequence: string
  - name: prediction
    list:
    - name: end
      dtype: int64
    - name: label
      dtype: string
    - name: score
      dtype: float64
    - name: start
      dtype: int64
  - name: prediction_agent
    dtype: string
  - name: annotation
    dtype: 'null'
  - name: annotation_agent
    dtype: 'null'
  - name: id
    dtype: 'null'
  - name: metadata
    struct:
    - name: medical_specialty
      dtype: string
    - name: status
      dtype: string
  - name: event_timestamp
    dtype: timestamp[us]
  - name: metrics
    dtype: 'null'
  splits:
  - name: train
    num_bytes: 58986555
    num_examples: 148699
  download_size: 17498377
  dataset_size: 58986555
---
# Dataset Card for "medical-keywords"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
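To make the feature schema above concrete, here is a minimal sketch of what a single record looks like and how the character-offset predictions map back onto `text`. The field names follow the schema in the card's metadata; the example values, the agent name, and the `MEDICAL_KEYWORD` label are illustrative assumptions, not taken from the dataset itself.

```python
# One record shaped like the card's schema. Values are illustrative only;
# "example-ner-model" and the label are hypothetical placeholders.
record = {
    "text": "Patient presents with acute myocardial infarction.",
    "tokens": ["Patient", "presents", "with", "acute",
               "myocardial", "infarction", "."],
    # Each prediction is a span over `text` given by character offsets,
    # per the list-of-structs schema (end/label/score/start).
    "prediction": [
        {"start": 22, "end": 49, "label": "MEDICAL_KEYWORD", "score": 0.91},
    ],
    "prediction_agent": "example-ner-model",
    "annotation": None,          # dtype 'null' in the schema: no gold labels
    "annotation_agent": None,
    "id": None,
    "metadata": {"medical_specialty": "Cardiology", "status": "Default"},
}

def predicted_spans(rec, min_score=0.5):
    """Return (surface_text, label, score) for predictions above a threshold,
    recovering each entity's surface form by slicing `text` with start/end."""
    return [
        (rec["text"][p["start"]:p["end"]], p["label"], p["score"])
        for p in rec["prediction"]
        if p["score"] >= min_score
    ]

print(predicted_spans(record))
# → [('acute myocardial infarction', 'MEDICAL_KEYWORD', 0.91)]
```

Note that `start`/`end` index characters in `text`, not positions in `tokens`, so spans can cover several tokens at once.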