Tasks: Token Classification
Modalities: Text
Formats: parquet
Languages: Thai
Size: 100K - 1M
Tags: word-tokenization
License:
Update files from the datasets library (from 1.18.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.18.0
README.md
CHANGED
```diff
@@ -18,6 +18,7 @@ task_categories:
 task_ids:
 - structure-prediction-other-word-tokenization
 paperswithcode_id: null
+pretty_name: best2009
 ---
 
 # Dataset Card for `best2009`
@@ -186,4 +187,4 @@ Character type features:
 
 ### Contributions
 
-Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset.
+Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset.
```
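The commit adds a `pretty_name` field to the dataset card's YAML front matter (the block between the `---` markers at the top of README.md). As a minimal sketch of how such fields can be read back out, here is a small, hypothetical parser for flat `key: value` entries; it is illustrative only and is not the parser used by the Hugging Face Hub:

```python
# Sketch: extract flat key/value fields from a dataset card's YAML
# front matter. The card text mirrors the metadata in this commit;
# the parser is a simplified, hypothetical example (it ignores list
# items and nested YAML).

card = """---
task_ids:
- structure-prediction-other-word-tokenization
paperswithcode_id: null
pretty_name: best2009
---

# Dataset Card for `best2009`
"""

def front_matter(text):
    """Return flat key: value pairs from the leading YAML block."""
    lines = text.split("\n")
    if lines[0] != "---":
        return {}
    fields = {}
    for line in lines[1:]:
        if line == "---":  # closing marker ends the front matter
            break
        if ":" in line and not line.startswith("-"):
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields

print(front_matter(card)["pretty_name"])  # best2009
```

A full implementation would use a real YAML parser; this version only shows where the newly added `pretty_name: best2009` entry lives relative to the rest of the card.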