---
license: apache-2.0
language:
- tr
task_categories:
- question-answering
- text-classification
- text-generation
- text-retrieval
tags:
- medical
- text
size_categories:
- n<1K
---
# Dataset Card for MedData_tr-1

This dataset contains 917 instances and 5,227,389 tokens in total.
## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

- **Language(s) (NLP):** Turkish
- **License:** Apache 2.0
### Dataset Sources

Memorial Health Library: https://www.memorial.com.tr/saglik-kutuphanesi
## Uses

<!-- Address questions around how the dataset is intended to be used. -->

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->

## Dataset Structure
**category**: The library category the text belongs to. The library is split into the following categories:

- Tanı ve Testler (Diagnoses and Tests)
- Hastalıklar (Diseases)
- Tedavi Yöntemleri (Treatment Methods)

**topic**: The topic of the text content.

**text**: The full text of the content.

**num_tokens**: The token count of the full text.
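
A minimal loading sketch is shown below; note that `username/MedData_tr-1` is a placeholder repository id, so substitute the dataset's actual path on the Hugging Face Hub:

```python
# Minimal sketch for loading the dataset with the `datasets` library.
# NOTE: "username/MedData_tr-1" is a placeholder repo id, not the real path.
from datasets import load_dataset

ds = load_dataset("username/MedData_tr-1", split="train")

print(ds.column_names)  # expected: ['category', 'topic', 'text', 'num_tokens']
print(ds[0]["topic"], ds[0]["num_tokens"])
```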
## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

This dataset was created to increase the amount of Turkish medical text data available in the Hugging Face Datasets library.
### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

Memorial is a hospital network based in Turkey. Its website provides a health library whose contents were written by doctors who are experts in their fields.
#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

The contents were scraped using Python's BeautifulSoup library.
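
A minimal sketch of this kind of pipeline, assuming a standard `requests` + BeautifulSoup setup; the URL is the library index from above, but the link filter and text extraction are illustrative, not the exact selectors used to build the dataset:

```python
# Hypothetical scraping sketch; the link filter below is illustrative,
# not the exact selector used to build this dataset.
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

INDEX_URL = "https://www.memorial.com.tr/saglik-kutuphanesi"

# Fetch the library index page.
resp = requests.get(INDEX_URL, timeout=30)
resp.raise_for_status()
soup = BeautifulSoup(resp.text, "html.parser")

# Collect article links under the health-library path (illustrative filter).
links = {
    urljoin(INDEX_URL, a["href"])
    for a in soup.find_all("a", href=True)
    if "saglik-kutuphanesi" in a["href"]
}

# Fetch one article and extract its visible text.
article_html = requests.get(sorted(links)[0], timeout=30).text
article = BeautifulSoup(article_html, "html.parser")
text = article.get_text(separator="\n", strip=True)
```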
### Annotations

<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

After collection, each text was tokenized and its token count was recorded in the `num_tokens` field.
#### Annotation process

<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->

Tokenization was done using tiktoken's `cl100k_base` encoding, which is used by `gpt-4-turbo`, `gpt-4`, `gpt-3.5-turbo`, and other OpenAI models.
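
The stored `num_tokens` values can be reproduced with a sketch like the following:

```python
# Reproducing the `num_tokens` field with tiktoken's cl100k_base encoding.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def count_tokens(text: str) -> int:
    """Return the number of cl100k_base tokens in `text`."""
    return len(enc.encode(text))

print(count_tokens("Merhaba dünya"))  # token count for a short Turkish phrase
```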
#### Personal and Sensitive Information

This dataset does not contain any personal, sensitive, or private information.
## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation

<!-- If there is a paper or blog post introducing the dataset, the APA and BibTeX information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]
## Dataset Card Authors

Zeynep Cahan

## Dataset Card Contact

[email protected]