---
language:
- en
pretty_name: elliquiy-rp_2023-04
size_categories:
- 100K<n<1M
configs:
- config_name: default
  data_files:
  - split: train
    path: elliquiy-rp_2023-04_*.parquet
license: cc-by-4.0
---

# Elliquiy roleplaying forum data
A collection of 6.6 million posts and 112 thousand forum threads from Elliquiy (arguably the largest and one of the oldest _adult_ roleplaying forums on the Internet), spanning April 2005 through April 2023: about 9 GB of uncompressed text data, including formatting tags. The data was processed from the [original source files](https://huggingface.co/datasets/lemonilia/roleplaying-forums-raw) that also went into the larger [raw Forum RP dataset](https://huggingface.co/datasets/lemonilia/Roleplay-Forums_2023-04) collection I uploaded, but unlike those, the posts here do not have spacing issues between HTML tags.

<ins>Note that only _solo_ and _small-group_ roleplays ("games") and OOC threads from the "Play-By-Post" sections were scraped, not the various "social" sections.</ins>

Basic automated cleaning was performed, but the messages are deliberately still in HTML format, with the notable exception that linebreaks were converted into `\n`. In addition to the messages, some metadata is provided for convenience, as well as alternative names that can be used instead of the actual usernames (in the format `User0`, `User1` ... `UserN`). These alternative names are unique per thread, but not globally, as illustrated in the sketch below.
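
For instance, a per-thread mapping from real usernames to their pseudonyms can be built from the documented `from` and `from-alternative` message fields. This is only an illustrative sketch (loading the data is shown in the usage section below):

```python
# Illustrative sketch: build the per-thread username -> UserN mapping.
# Only the documented 'from' and 'from-alternative' fields are used.
def username_map(messages):
    mapping = {}
    for message in messages:
        mapping.setdefault(message['from'], message['from-alternative'])
    return mapping

# Note: the same real user may map to 'User0' in one thread and 'User3' in
# another, since the alternative names are only unique within a single thread.
```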

**Note**: I might update the dataset in the future as I improve the cleaning procedure or the data format.

# Limitations, quirks and issues
- During the scraping process (performed in April 2023) some information, such as text color and links, was lost.
- Given the text formatting used by many users, automated conversion to Markdown seems very difficult without causing severe formatting problems.
- Most of the data is sexual in nature.

# Basic dataset usage
Loading the files with `pandas` requires PyArrow (installable via `pip`). FastParquet will not work properly due to the nested data structure.

```python
import pandas

# Load a parquet file into one DataFrame
df = pandas.read_parquet('elliquiy-rp_2023-04_00000-of-00006.parquet')

# Load the shareGPT-like message group from one specific row into a standard Python list
messages = list(df.iloc[2350].messages)
```

Consolidate the parquet files into one large DataFrame (requires large amounts of memory):

```python
import glob
import pandas

filenames = sorted(glob.glob('*.parquet'))
parquets = []

# Read the parquet files one by one
for file in filenames:
    parquets.append(pandas.read_parquet(file))

# Concatenate the parquet files into one DataFrame
full_df = pandas.concat(parquets)
```

Showing thread metadata from one specific row after loading the data:

```text
In [2]: df.iloc[2350]
Out[2]: 
thread-id                                                        11897
thread-title                           The League of Extraordinary ...
category-id                                                         65
category-name                             Noncon: Human-Freeform Solos
participant-count                                                    3
message-count                                                      242
word-count-total                                                 35197
word-count-median                                                136.0
messages         {'from': 'OrdinaryName', 'from-alternative': 'User...
Name: 2350, dtype: object
```

# Dataset field explanation
## Threads
| Field | Explanation
|-----|-----
| thread-id | The forum software's given thread id
| thread-date | The date of the opening post in the thread
| thread-title | User-given thread title
| category-id | The id of the subforum where the thread was posted
| category-name | The full name of the subforum. "Small Groups" subforums are dedicated to roleplays with more than two participants, while "Solo" subforums are generally for two, although threads with more than two participants may appear there as well.
| participant-count | The number of users writing in the thread
| message-count | The total number of messages in the thread
| word-count-total | The cumulative sum of space-separated words across the thread's messages, calculated with Python's `str.split()` and including HTML tags
| word-count-median | The median message length in words, calculated with Python's `str.split()` and including HTML tags (recomputed in the sketch after this table)
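
For reference, the two word-count fields can presumably be recomputed from the messages along these lines. This is only a sketch: it assumes `df` was loaded as in the usage section above, and the name of the message text field (`message` here) is an assumption to be checked against the actual schema.

```python
import statistics

def word_counts(messages, text_field='message'):
    # Space-separated word counts per message; HTML tags count as words
    lengths = [len(m[text_field].split()) for m in messages]
    return sum(lengths), statistics.median(lengths)

# word-count-total and word-count-median for one thread row
total, median = word_counts(df.iloc[2350].messages)
```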

## Messages
| Field | Explanation
|-----|-----
| index | Message number, starting from zero at the beginning of the thread. Added mainly for debugging purposes
| from | The name of the user who wrote the message. **Avoid using it** if possible
| from-alternative | Alternative, locally-unique name for the user in the form of `User0` ... `UserN`
| timestamp | Message timestamp in UTC, ISO 8601 format
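
A minimal sketch of reshaping one thread into a simple conversation list with pseudonymous speakers, again assuming `df` from the usage section and a hypothetical `message` field for the HTML post body:

```python
# Illustrative only: reshape one thread into a list of speaker/text records.
conversation = [
    {
        'speaker': m['from-alternative'],  # thread-local pseudonym, e.g. 'User0'
        'timestamp': m['timestamp'],       # ISO UTC timestamp
        'text': m['message'],              # HTML post body (assumed field name)
    }
    for m in df.iloc[2350].messages
]
```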

# Message length analysis
Messages in older threads were in general shorter and more casually written than those in newer ones. This might be due to a slow shift over time in the userbase and/or self-selection due to access requirements.

![Median message length trend over time](https://files.catbox.moe/ogctt3.png)

![Median message length vs dataset fraction](https://files.catbox.moe/0ya6jd.png) 

# Cleaning procedure details
## At the HTML element level
- Simplified blockquotes
- Removed all attributes from most tags (see the sketch after this list)
- Cosmetic font sizes consolidated into three categories: `<small>`, normal, `<big>` (deprecated tag)
- Font changes removed
- Special CSS effects removed
- Background-colored text changed into `<mark>`
- Spoiler tags converted into `<details><summary>` blocks
  - However, inline spoilers don't work well with this; to be revisited at a later time
- Removed left/right "floating" `<div>`
- Removed left/right/justify text-alignment `<div>`
- Center alignment `<div>` changed to `<center>` (deprecated tag)
- Recomposed URLs and their associated text into `<a>` elements, when possible
  - The data was originally scraped using a forum function that decomposed `<a>` links into text+URL
- Tried to reduce the amount of `<table>` inappropriately used for presentation purposes
  - More work needed
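
To give an idea of what an element-level pass can look like, here is a minimal BeautifulSoup sketch covering two of the steps above (attribute removal and center-alignment conversion). It is not the actual cleaning script, and the markup rules it assumes may differ from the original forum HTML:

```python
from bs4 import BeautifulSoup

def clean_elements(html):
    soup = BeautifulSoup(html, 'html.parser')
    for tag in soup.find_all(True):
        # Center-aligned <div> -> (deprecated) <center> tag; the exact
        # style-matching rule here is an assumption
        style = tag.get('style', '').replace(' ', '')
        if tag.name == 'div' and 'text-align:center' in style:
            tag.name = 'center'
        # Drop all attributes from the tag
        tag.attrs = {}
    return str(soup)
```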

## At the text level
- Converted post dates to ISO format
- Removed non-standard unicode spaces
- Changed generic spoiler text
- Removed some leftover BB tags (most often the result of user error)
  - More work needed
- Shortened some bare URLs
- Changed `elliquiy.com` URLs into `example.com`
- Removed some site-internal URLs
- Converted all smilies into emoji
- Removed excessive newlines and leading/trailing spaces (see the sketch after this list)
- Fixed some HTML element spacing issues
  - More work needed
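
A rough sketch of what the text-level normalization can look like; the exact set of non-standard spaces and the newline threshold used here are assumptions, not the actual procedure:

```python
import re

def normalize_text(text):
    # Replace a few non-standard Unicode spaces with a plain space (assumed set)
    text = re.sub(r'[\u00a0\u2007\u2009\u202f]', ' ', text)
    # Collapse runs of three or more newlines down to two (assumed threshold)
    text = re.sub(r'\n{3,}', '\n\n', text)
    # Trim leading/trailing whitespace
    return text.strip()
```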

## NOT done
- Replacing HTML escape characters (see the sketch after this list if you need this downstream)
- Turning image URLs into `<img>`
- Balancing quote marks and other characters that are supposed to be paired
- Changing fancy punctuation to ASCII punctuation
- Removing usernames entirely from the dataset
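
If plain text is needed downstream, unescaping the HTML entities (deliberately left in place) is straightforward with the standard library; a minimal example:

```python
import html

# Unescape entities such as &amp;, &quot; and numeric references in a post body
plain = html.unescape('Rolling the dice &amp; hoping for a natural 20&hellip;')
# -> 'Rolling the dice & hoping for a natural 20…'
```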