---
language:
- en
license: unknown
size_categories:
- 10K<n<100K
task_categories:
- text-classification
pretty_name: Java Code Readability Krod
tags:
- readability
- code
- source code
- code readability
- Java
features:
- name: code_snippet
  dtype: string
- name: score
  dtype: float
dataset_info:
  features:
  - name: code_snippet
  ...
configs:
- split: train
  path: data/train-*
---

# Java Code Readability Krod

This dataset contains **63,460 Java code snippets**, each paired with a **readability score**. The snippets were mined from [GitHub](https://github.com/) and automatically processed and labelled.

You can load the dataset using the Hugging Face `datasets` library:

```python
from datasets import load_dataset

ds = load_dataset("LuKrO/krod")
```

The snippets are **not** split into train, test, and validation sets. Instead, the whole dataset is contained in the **train** split:

```python
ds = ds["train"]
ds_as_list = ds.to_list()  # Convert the dataset to whatever format suits you best
```

Each entry of the dataset is structured as follows:

```json
{
    "code_snippet": ...,  // Java source code snippet
    "score": ...          // Readability score
}
```
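
For example, a single entry can be inspected by index (a small usage sketch; the `datasets` library returns each row as a plain Python dict):

```python
sample = ds[0]                        # first entry of the train split
print(sample["score"])                # readability score of this snippet
print(sample["code_snippet"][:200])   # first 200 characters of the Java code
```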

The main goal of this repository is to train **readability classifiers for Java source code**.

## Dataset Details

### Dataset Description

- **Curated by:** Krodinger Lukas
- **Shared by:** Krodinger Lukas
- **Language(s) (NLP):** Java
- **License:** Unknown

## Uses

The dataset can be used for training Java code readability classifiers.
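
As an illustration, here is a minimal sketch of such a classifier. It assumes scikit-learn is installed and binarizes the score at the scale midpoint of 3; the threshold and the character-n-gram features are illustrative choices, not part of the dataset:

```python
from datasets import load_dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

ds = load_dataset("LuKrO/krod")["train"]

# Binarize the readability score: above 3 -> readable (1), otherwise unreadable (0).
texts = ds["code_snippet"]
labels = [1 if score > 3 else 0 for score in ds["score"]]

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=42, stratify=labels
)

# Character n-grams are a simple, language-agnostic representation of source code.
vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(2, 4), max_features=50_000)
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.fit_transform(X_train), y_train)

predictions = clf.predict(vectorizer.transform(X_test))
print("Accuracy:", accuracy_score(y_test, predictions))
```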

## Dataset Structure

Each entry of the dataset consists of a **code_snippet** and a **score**.
The code_snippet (string) is the Java code snippet that was downloaded from GitHub; each snippet has a readability score assigned to it.
The score is based on a five-point Likert scale, with 1 being very unreadable and 5 being very readable.
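
To check which score values actually occur, the score column can be counted directly (a small sketch; `ds` is the train split loaded above, and column access returns a plain Python list):

```python
from collections import Counter

print(Counter(ds["score"]))  # distribution of readability scores in the train split
```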

## Dataset Creation

### Curation Rationale

To advance code readability classification, creating new datasets in this research field is of high importance.
We provide a new dataset generated with a new approach.
Previous datasets for code readability classification were mostly created by humans manually annotating the readability of code.
Those datasets are relatively small, containing only 421 samples combined.
Since our approach can be automated, we can provide code snippets at a much larger scale.
We share this dataset on Hugging Face to provide easy access and simple usage.

### Source Data

The code snippets come from various public GitHub repositories:
TODO: Add repos

#### Data Collection and Processing

The data collection and preprocessing for this Hugging Face dataset involved two main steps.
First, GitHub repositories known for high code quality were downloaded and labelled as highly readable; the extracted methods were assigned a score of 4.5.
Second, the code was intentionally manipulated to reduce its readability; the resulting code was labelled with a score of 1.5.
This resulted in an automatically generated training dataset for source code readability classification.