holylovenia committed on
Commit 54d2445 · verified · 1 Parent(s): f8b02a3

Upload README.md with huggingface_hub

Files changed (1): README.md (+53 -19)
@@ -1,26 +1,62 @@
 ---
- tags:
- - sentiment-analysis
- language:
 - ind
 ---

- # id_abusive
-
 The ID_ABUSIVE dataset is a collection of 2,016 informal abusive tweets in the Indonesian language,
-
 designed for the sentiment analysis NLP task. The dataset was crawled from Twitter, then filtered
-
 and labelled manually by 20 volunteer annotators. Each tweet is labelled with one of three classes:
-
 not abusive language, abusive but not offensive, and offensive language.

 ## Dataset Usage

- Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.

 ## Citation

 ```
 @article{IBROHIM2018222,
 title = {A Dataset and Preliminaries Study for Abusive Language Detection in Indonesian Social Media},
@@ -36,16 +72,14 @@ author = {Muhammad Okky Ibrohim and Indra Budi},
 keywords = {abusive language, twitter, machine learning},
 abstract = {Abusive language is an expression (both oral or text) that contains abusive/dirty words or phrases both in the context of jokes, a vulgar sex conservation or to cursing someone. Nowadays many people on the internet (netizens) write and post an abusive language in the social media such as Facebook, Line, Twitter, etc. Detecting an abusive language in social media is a difficult problem to resolve because this problem can not be resolved just use word matching. This paper discusses a preliminaries study for abusive language detection in Indonesian social media and the challenge in developing a system for Indonesian abusive language detection, especially in social media. We also built reported an experiment for abusive language detection on Indonesian tweet using machine learning approach with a simple word n-gram and char n-gram features. We use Naive Bayes, Support Vector Machine, and Random Forest Decision Tree classifier to identify the tweet whether the tweet is a not abusive language, abusive but not offensive, or offensive language. The experiment results show that the Naive Bayes classifier with the combination of word unigram + bigrams features gives the best result i.e. 70.06% of F1 - Score. However, if we classifying the tweet into two labels only (not abusive language and abusive language), all classifier that we used gives a higher result (more than 83% of F1 - Score for every classifier). The dataset in this experiment is available for other researchers that interest to improved this study.}
 }
- ```
-
- ## License
-
- Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International

- ## Homepage

- [https://www.sciencedirect.com/science/article/pii/S1877050918314583](https://www.sciencedirect.com/science/article/pii/S1877050918314583)
-
- ### NusaCatalogue

- For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
 
+
 ---
+ language:
 - ind
+ pretty_name: Id Abusive
+ task_categories:
+ - sentiment-analysis
+ tags:
+ - sentiment-analysis
 ---

 The ID_ABUSIVE dataset is a collection of 2,016 informal abusive tweets in the Indonesian language,
 designed for the sentiment analysis NLP task. The dataset was crawled from Twitter, then filtered
 and labelled manually by 20 volunteer annotators. Each tweet is labelled with one of three classes:
 not abusive language, abusive but not offensive, and offensive language.
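
For reference, the three-way labelling scheme can be sketched as a simple id-to-label mapping. The integer ids below are an assumption for illustration only; check the loaded dataset's features for the actual encoding.

```python
# Hypothetical id-to-label mapping for the three annotation classes.
# The integer ids are assumed for illustration; the actual encoding is
# defined by the dataset's label feature.
ID2LABEL = {
    0: "not abusive language",
    1: "abusive but not offensive",
    2: "offensive language",
}
LABEL2ID = {name: idx for idx, name in ID2LABEL.items()}


def label_name(label_id: int) -> str:
    """Return the human-readable class name for a label id."""
    return ID2LABEL[label_id]
```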

+
+ ## Languages
+
+ ind
+
+ ## Supported Tasks
+
+ Sentiment Analysis
+
 ## Dataset Usage
+ ### Using `datasets` library
+ ```
+ from datasets import load_dataset
+ dset = load_dataset("SEACrowd/id_abusive", trust_remote_code=True)
+ ```
+ ### Using `seacrowd` library
+ ```
+ import seacrowd as sc
+ # Load the dataset using the default config
+ dset = sc.load_dataset("id_abusive", schema="seacrowd")
+ # Check all available subsets (config names) of the dataset
+ print(sc.available_config_names("id_abusive"))
+ # Load the dataset using a specific config
+ dset = sc.load_dataset_by_config_name(config_name="<config_name>")
+ ```
+
+ More details on how to use the `seacrowd` library can be found [here](https://github.com/SEACrowd/seacrowd-datahub?tab=readme-ov-file#how-to-use).
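
Once a split is loaded, a typical first step is to inspect the class balance. Below is a minimal sketch using stand-in records in place of the real loaded split; the `text`/`label` field names and label strings are assumptions made for illustration, so check the actual schema of the dataset you load.

```python
from collections import Counter

# Stand-in records mimicking an assumed {"text", "label"} schema;
# replace this list with the split returned by load_dataset / sc.load_dataset.
examples = [
    {"text": "contoh tweet pertama", "label": "not abusive language"},
    {"text": "contoh tweet kedua", "label": "offensive language"},
    {"text": "contoh tweet ketiga", "label": "not abusive language"},
]

# Count how many tweets fall into each class.
label_counts = Counter(ex["label"] for ex in examples)
print(label_counts.most_common())
# → [('not abusive language', 2), ('offensive language', 1)]
```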

+ ## Dataset Homepage
+
+ [https://www.sciencedirect.com/science/article/pii/S1877050918314583](https://www.sciencedirect.com/science/article/pii/S1877050918314583)
+
+ ## Dataset Version
+
+ Source: 1.0.0. SEACrowd: 2024.06.20.
+
+ ## Dataset License
+
+ Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International

 ## Citation

+ If you are using the **Id Abusive** dataloader in your work, please cite the following:
 ```
 @article{IBROHIM2018222,
 title = {A Dataset and Preliminaries Study for Abusive Language Detection in Indonesian Social Media},
…
 keywords = {abusive language, twitter, machine learning},
 abstract = {Abusive language is an expression (both oral or text) that contains abusive/dirty words or phrases both in the context of jokes, a vulgar sex conservation or to cursing someone. Nowadays many people on the internet (netizens) write and post an abusive language in the social media such as Facebook, Line, Twitter, etc. Detecting an abusive language in social media is a difficult problem to resolve because this problem can not be resolved just use word matching. This paper discusses a preliminaries study for abusive language detection in Indonesian social media and the challenge in developing a system for Indonesian abusive language detection, especially in social media. We also built reported an experiment for abusive language detection on Indonesian tweet using machine learning approach with a simple word n-gram and char n-gram features. We use Naive Bayes, Support Vector Machine, and Random Forest Decision Tree classifier to identify the tweet whether the tweet is a not abusive language, abusive but not offensive, or offensive language. The experiment results show that the Naive Bayes classifier with the combination of word unigram + bigrams features gives the best result i.e. 70.06% of F1 - Score. However, if we classifying the tweet into two labels only (not abusive language and abusive language), all classifier that we used gives a higher result (more than 83% of F1 - Score for every classifier). The dataset in this experiment is available for other researchers that interest to improved this study.}
 }

+ @article{lovenia2024seacrowd,
+ title={SEACrowd: A Multilingual Multimodal Data Hub and Benchmark Suite for Southeast Asian Languages},
+ author={Holy Lovenia and Rahmad Mahendra and Salsabil Maulana Akbar and Lester James V. Miranda and Jennifer Santoso and Elyanah Aco and Akhdan Fadhilah and Jonibek Mansurov and Joseph Marvin Imperial and Onno P. Kampman and Joel Ruben Antony Moniz and Muhammad Ravi Shulthan Habibi and Frederikus Hudi and Railey Montalan and Ryan Ignatius and Joanito Agili Lopo and William Nixon and Börje F. Karlsson and James Jaya and Ryandito Diandaru and Yuze Gao and Patrick Amadeus and Bin Wang and Jan Christian Blaise Cruz and Chenxi Whitehouse and Ivan Halim Parmonangan and Maria Khelli and Wenyu Zhang and Lucky Susanto and Reynard Adha Ryanda and Sonny Lazuardi Hermawan and Dan John Velasco and Muhammad Dehan Al Kautsar and Willy Fitra Hendria and Yasmin Moslem and Noah Flynn and Muhammad Farid Adilazuarda and Haochen Li and Johanes Lee and R. Damanhuri and Shuo Sun and Muhammad Reza Qorib and Amirbek Djanibekov and Wei Qi Leong and Quyet V. Do and Niklas Muennighoff and Tanrada Pansuwan and Ilham Firdausi Putra and Yan Xu and Ngee Chia Tai and Ayu Purwarianti and Sebastian Ruder and William Tjhi and Peerat Limkonchotiwat and Alham Fikri Aji and Sedrick Keh and Genta Indra Winata and Ruochen Zhang and Fajri Koto and Zheng-Xin Yong and Samuel Cahyawijaya},
+ year={2024},
+ eprint={2406.10118},
+ journal={arXiv preprint arXiv: 2406.10118}
+ }

+ ```