parquet-converter committed
Commit 2764a71 · 1 parent: 381dba4

Update parquet files

.gitattributes DELETED
@@ -1,27 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
 
README.md DELETED
@@ -1,223 +0,0 @@
- ---
- task_categories:
- - text-classification
- multilinguality:
- - multilingual
- task_ids: []
- language:
- - awa
- - bho
- - bra
- - hi
- - mag
- language_creators:
- - found
- annotations_creators:
- - no-annotation
- source_datasets:
- - original
- size_categories:
- - 10K<n<100K
- license:
- - cc-by-4.0
- paperswithcode_id: null
- pretty_name: ilist
- tags:
- - language-identification
- dataset_info:
-   features:
-   - name: language_id
-     dtype:
-       class_label:
-         names:
-           0: AWA
-           1: BRA
-           2: MAG
-           3: BHO
-           4: HIN
-   - name: text
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 14362998
-     num_examples: 70351
-   - name: test
-     num_bytes: 2146857
-     num_examples: 9692
-   - name: validation
-     num_bytes: 2407643
-     num_examples: 10329
-   download_size: 18284850
-   dataset_size: 18917498
- ---
-
- # Dataset Card for ilist
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:**
- - **Repository:** https://github.com/kmi-linguistics/vardial2018
- - **Paper:** [Language Identification and Morphosyntactic Tagging: The Second VarDial Evaluation Campaign](https://aclanthology.org/W18-3901/)
- - **Leaderboard:**
- - **Point of Contact:** [email protected]
-
- ### Dataset Summary
-
- This dataset was introduced in a shared task aimed at identifying five closely related languages of the Indo-Aryan language family: Hindi (also known as Khari Boli), Braj Bhasha, Awadhi, Bhojpuri, and Magahi. These languages form part of a continuum stretching from Western Uttar Pradesh (Hindi and Braj Bhasha) through Eastern Uttar Pradesh (Awadhi and Bhojpuri) to the neighbouring eastern state of Bihar (Bhojpuri and Magahi).
-
- For this task, participants were provided with approximately 15,000 sentences in each language, drawn mainly from the domain of literature and published both on the web and in print.
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed]
-
- ### Languages
-
- Hindi, Braj Bhasha, Awadhi, Bhojpuri, and Magahi
-
- ## Dataset Structure
-
- ### Data Instances
-
- ```
- {
-   "language_id": 4,
-   "text": "तभी बारिश हुई थी जिसका गीलापन इन मूर्तियों को इन तस्वीरों में एक अलग रूप देता है ."
- }
- ```
-
- ### Data Fields
-
- - `text`: the text to classify
- - `language_id`: the label for the text, an integer from 0 to 4. The ids correspond, in order, to the languages "AWA" (0), "BRA" (1), "MAG" (2), "BHO" (3), and "HIN" (4).
-
- ### Data Splits
-
- |                      | train | valid | test  |
- |----------------------|-------|-------|-------|
- | # of input sentences | 70351 | 9692  | 10329 |
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed]
-
- ### Source Data
-
- The data for this task was collected from both printed and digital sources. Printed materials were
- obtained from different institutions that promote these languages. We also gathered data from libraries,
- as well as from local literary and cultural groups. We collected printed stories, novels, and essays in
- books, magazines, and newspapers.
-
- #### Initial Data Collection and Normalization
-
- We scanned the printed materials, then performed OCR, and finally asked native speakers of the
- respective languages to correct the OCR output. Since no OCR models specific to these languages are
- available, we used Google's Hindi OCR, which is part of the Drive API. Since all the languages use the
- Devanagari script, we expected the OCR to work reasonably well, and overall it did. We also obtained
- some blog posts in Magahi and Bhojpuri.
-
- #### Who are the source language producers?
-
- [More Information Needed]
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed]
-
- #### Who are the annotators?
-
- [More Information Needed]
-
- ### Personal and Sensitive Information
-
- [More Information Needed]
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed]
-
- ### Discussion of Biases
-
- [More Information Needed]
-
- ### Other Known Limitations
-
- [More Information Needed]
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed]
-
- ### Licensing Information
-
- This work is licensed under a Creative Commons Attribution 4.0 International License: http://creativecommons.org/licenses/by/4.0/
-
- ### Citation Information
-
- ```
- @inproceedings{zampieri-etal-2018-language,
-     title = "Language Identification and Morphosyntactic Tagging: The Second {V}ar{D}ial Evaluation Campaign",
-     author = {Zampieri, Marcos and
-       Malmasi, Shervin and
-       Nakov, Preslav and
-       Ali, Ahmed and
-       Shon, Suwon and
-       Glass, James and
-       Scherrer, Yves and
-       Samard{\v{z}}i{\'c}, Tanja and
-       Ljube{\v{s}}i{\'c}, Nikola and
-       Tiedemann, J{\"o}rg and
-       van der Lee, Chris and
-       Grondelaers, Stefan and
-       Oostdijk, Nelleke and
-       Speelman, Dirk and
-       van den Bosch, Antal and
-       Kumar, Ritesh and
-       Lahiri, Bornini and
-       Jain, Mayank},
-     booktitle = "Proceedings of the Fifth Workshop on {NLP} for Similar Languages, Varieties and Dialects ({V}ar{D}ial 2018)",
-     month = aug,
-     year = "2018",
-     address = "Santa Fe, New Mexico, USA",
-     publisher = "Association for Computational Linguistics",
-     url = "https://aclanthology.org/W18-3901",
-     pages = "1--17",
- }
- ```
-
- ### Contributions
-
- Thanks to [@vasudevgupta7](https://github.com/vasudevgupta7) for adding this dataset.
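
The card's class-label mapping is straightforward to apply in code. A minimal sketch, assuming the converted dataset is loaded through the `datasets` library (the bare repo id "ilist" is an assumption and may need a namespace prefix):

```python
# Sketch: decode language_id integers via the ClassLabel feature
# declared in the card above (0: AWA, 1: BRA, 2: MAG, 3: BHO, 4: HIN).
from datasets import load_dataset

ds = load_dataset("ilist", split="train")  # repo id is an assumption
label = ds.features["language_id"]

example = ds[0]
print(example["text"])
print(label.int2str(example["language_id"]))  # e.g. "AWA"
```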
 
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"default": {"description": "This dataset is introduced in a task which aimed at identifying 5 closely-related languages of Indo-Aryan language family \u2013\nHindi (also known as Khari Boli), Braj Bhasha, Awadhi, Bhojpuri, and Magahi.\n", "citation": "\\\n@inproceedings{zampieri-etal-2018-language,\n title = \"Language Identification and Morphosyntactic Tagging: The Second {V}ar{D}ial Evaluation Campaign\",\n author = {Zampieri, Marcos and\n Malmasi, Shervin and\n Nakov, Preslav and\n Ali, Ahmed and\n Shon, Suwon and\n Glass, James and\n Scherrer, Yves and\n Samard{\\v{z}}i{\\'c}, Tanja and\n Ljube{\\v{s}}i{\\'c}, Nikola and\n Tiedemann, J{\\\"o}rg and\n van der Lee, Chris and\n Grondelaers, Stefan and\n Oostdijk, Nelleke and\n Speelman, Dirk and\n van den Bosch, Antal and\n Kumar, Ritesh and\n Lahiri, Bornini and\n Jain, Mayank},\n booktitle = \"Proceedings of the Fifth Workshop on {NLP} for Similar Languages, Varieties and Dialects ({V}ar{D}ial 2018)\",\n month = aug,\n year = \"2018\",\n address = \"Santa Fe, New Mexico, USA\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://aclanthology.org/W18-3901\",\n pages = \"1--17\",\n}\n", "homepage": "https://github.com/kmi-linguistics/vardial2018", "license": "", "features": {"language_id": {"num_classes": 5, "names": ["AWA", "BRA", "MAG", "BHO", "HIN"], "id": null, "_type": "ClassLabel"}, "text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": [{"task": "text-classification", "text_column": "text", "label_column": "language_id"}], "builder_name": "ilist", "config_name": "default", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 14362998, "num_examples": 70351, "dataset_name": "ilist"}, "test": {"name": "test", "num_bytes": 2146857, "num_examples": 9692, "dataset_name": "ilist"}, "validation": {"name": "validation", "num_bytes": 2407643, "num_examples": 10329, "dataset_name": "ilist"}}, "download_checksums": {"https://raw.githubusercontent.com/kmi-linguistics/vardial2018/master/dataset/train.txt": {"num_bytes": 13870509, "checksum": "1bd3ae96dc17ce44278cff256972649b510d6d8595f420e95bc8284f207e2678"}, "https://raw.githubusercontent.com/kmi-linguistics/vardial2018/master/dataset/gold.txt": {"num_bytes": 2079009, "checksum": "72909da09ed1c1f3710c879ca5b69282e483ce60fe7d90497cfbca46016da704"}, "https://raw.githubusercontent.com/kmi-linguistics/vardial2018/master/dataset/dev.txt": {"num_bytes": 2335332, "checksum": "2ef7944502bb2ee49358873e5d9de241f0a8a8b8a9b88e3e8c37873afd783797"}}, "download_size": 18284850, "post_processing_size": null, "dataset_size": 18917498, "size_in_bytes": 37202348}}
 
default/ilist-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d339760541c427804aec62ac505dbf0f235006aeaf28e15a5bd103ad156c9b65
+ size 993641

default/ilist-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:44223127415e2cf0491655f4027d5bb1b09285f852b4c75e21d4e43b39a1dd9f
+ size 6611425

default/ilist-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ea5320e9f01ec0d6662adeaf4e9f1c029d2efd2fd2dd423b9929faca6be289e0
+ size 1092609
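
With the loading script gone, these parquet files can also be read directly. A minimal sketch using pandas over the `hf://` fsspec protocol (requires `huggingface_hub`; the bare repo id "ilist" is an assumption, and the paths are the ones added in this commit):

```python
# Sketch: read the converted parquet splits directly with pandas.
# Paths match the files added above; the repo id is an assumption.
import pandas as pd

splits = {
    "train": "default/ilist-train.parquet",
    "validation": "default/ilist-validation.parquet",
    "test": "default/ilist-test.parquet",
}
frames = {
    name: pd.read_parquet(f"hf://datasets/ilist/{path}")
    for name, path in splits.items()
}

# language_id is stored as a class-label integer; 0..4 map to
# AWA, BRA, MAG, BHO, HIN per the deleted README and dataset_infos.json.
print(frames["train"].head())
```
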
ilist.py DELETED
@@ -1,117 +0,0 @@
- # coding=utf-8
- # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
- """Indo-Aryan Language Identification Shared Task Dataset"""
-
-
- import datasets
- from datasets.tasks import TextClassification
-
-
- _CITATION = r"""\
- @inproceedings{zampieri-etal-2018-language,
-     title = "Language Identification and Morphosyntactic Tagging: The Second {V}ar{D}ial Evaluation Campaign",
-     author = {Zampieri, Marcos and
-       Malmasi, Shervin and
-       Nakov, Preslav and
-       Ali, Ahmed and
-       Shon, Suwon and
-       Glass, James and
-       Scherrer, Yves and
-       Samard{\v{z}}i{\'c}, Tanja and
-       Ljube{\v{s}}i{\'c}, Nikola and
-       Tiedemann, J{\"o}rg and
-       van der Lee, Chris and
-       Grondelaers, Stefan and
-       Oostdijk, Nelleke and
-       Speelman, Dirk and
-       van den Bosch, Antal and
-       Kumar, Ritesh and
-       Lahiri, Bornini and
-       Jain, Mayank},
-     booktitle = "Proceedings of the Fifth Workshop on {NLP} for Similar Languages, Varieties and Dialects ({V}ar{D}ial 2018)",
-     month = aug,
-     year = "2018",
-     address = "Santa Fe, New Mexico, USA",
-     publisher = "Association for Computational Linguistics",
-     url = "https://aclanthology.org/W18-3901",
-     pages = "1--17",
- }
- """
-
- _DESCRIPTION = """\
- This dataset is introduced in a task which aimed at identifying 5 closely-related languages of Indo-Aryan language family –
- Hindi (also known as Khari Boli), Braj Bhasha, Awadhi, Bhojpuri, and Magahi.
- """
-
- _URL = "https://raw.githubusercontent.com/kmi-linguistics/vardial2018/master/dataset/{}.txt"
-
-
- class Ilist(datasets.GeneratorBasedBuilder):
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {
-                     "language_id": datasets.ClassLabel(names=["AWA", "BRA", "MAG", "BHO", "HIN"]),
-                     "text": datasets.Value("string"),
-                 }
-             ),
-             supervised_keys=None,
-             homepage="https://github.com/kmi-linguistics/vardial2018",
-             citation=_CITATION,
-             task_templates=[TextClassification(text_column="text", label_column="language_id")],
-         )
-
-     def _split_generators(self, dl_manager):
-         filepaths = dl_manager.download_and_extract(
-             {
-                 "train": _URL.format("train"),
-                 "test": _URL.format("gold"),
-                 "dev": _URL.format("dev"),
-             }
-         )
-
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={
-                     "filepath": filepaths["train"],
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={
-                     "filepath": filepaths["test"],
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={
-                     "filepath": filepaths["dev"],
-                 },
-             ),
-         ]
-
-     def _generate_examples(self, filepath):
-         """Yields examples."""
-         with open(filepath, "r", encoding="utf-8") as file:
-             for idx, row in enumerate(file):
-                 row = row.strip("\n").split("\t")
-                 if len(row) == 1:
-                     continue
-                 yield idx, {"language_id": row[1], "text": row[0]}
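
For reference, the removed loader's parsing logic is simple to reproduce without the builder machinery. A minimal sketch, assuming the upstream VarDial files referenced by `_URL` are still reachable (each line is tab-separated text and label, and single-field lines are skipped, as in `_generate_examples`):

```python
# Sketch: replicate the deleted _generate_examples over the raw VarDial files.
# Assumes the GitHub URLs from the removed script still resolve.
import urllib.request

_URL = "https://raw.githubusercontent.com/kmi-linguistics/vardial2018/master/dataset/{}.txt"

def read_split(name):
    """Yield (text, label_name) pairs; name is "train", "gold", or "dev"."""
    with urllib.request.urlopen(_URL.format(name)) as resp:
        for line in resp.read().decode("utf-8").splitlines():
            row = line.split("\t")
            if len(row) == 1:  # same skip rule as the deleted script
                continue
            yield row[0], row[1]  # label is one of AWA/BRA/MAG/BHO/HIN

train = list(read_split("train"))  # "gold" -> test split, "dev" -> validation
print(len(train), train[0])
```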