Datasets:
Tasks: Text Generation
Sub-tasks: language-modeling
Formats: parquet
Languages: Danish
Size: 1M - 10M
License:

KennethEnevoldsen committed
docs: added minimal contribution guidelines

CONTRIBUTING.md (+20 -0)
ADDED
## Working with the dataset locally

A Hugging Face datasets repository is a Git repository like any other. You can simply download it like so:

```bash
git clone https://huggingface.co/datasets/danish-foundation-models/danish-gigaword-2
cd danish-gigaword-2
```

Note that large files in the repository are stored with Git LFS, so you may need to run `git lfs install` before cloning.

You can then work with the dataset locally like so:

```py
from datasets import load_dataset

name = "../." # instead of "danish-foundation-models/danish-gigaword-2"
dataset = load_dataset(name, split="train")
# make transformations here
```
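As a sketch of such a transformation, the hypothetical function below lowercases a `text` column (the column name is an assumption, not taken from the dataset schema); with the `datasets` library it would be applied via `dataset.map`:

```python
def lowercase_text(example):
    # Hypothetical transformation: normalize the "text" column to lowercase.
    # The column name "text" is an assumption about the dataset schema.
    example["text"] = example["text"].lower()
    return example

# Applied to the loaded dataset with:
# dataset = dataset.map(lowercase_text)
```

Note that `dataset.map` returns a new dataset rather than modifying the original in place, so reassign the result as shown in the comment.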

> Note: even when working locally, Hugging Face `datasets` still uses a cache, so after making changes you might need to reset it to see that everything works correctly. You can do this by deleting the cached files, which you can locate using `dataset.cache_files`.
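`dataset.cache_files` returns a list of dicts, each with a `"filename"` key pointing to an Arrow cache file on disk. A minimal sketch of deleting those files (assuming that structure; the library also provides `dataset.cleanup_cache_files()` for the same purpose):

```python
from pathlib import Path

def clear_cache(cache_files):
    """Delete on-disk cache files as listed by `dataset.cache_files`.

    Expects a list of dicts with a "filename" key, which is the structure
    the `datasets` library returns.
    """
    removed = []
    for entry in cache_files:
        path = Path(entry["filename"])
        if path.exists():
            path.unlink()
            removed.append(str(path))
    return removed
```

After clearing the cache, reloading the dataset with `load_dataset` will rebuild it from the source files, so your local changes become visible.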