path: data/train-*
---

# Five Million Bluesky Posts

![image/png](https://cdn-uploads.huggingface.co/production/uploads/674783a7c6317bfd72b33659/bkAI0CJNPcZMrrP7VgGIC.png)

<!-- Provide a quick summary of the dataset. -->

This dataset contains 5 million public posts collected from Bluesky Social's firehose API, intended for machine learning research and experimentation with social media data.

This dataset was inspired by Alpindale's original 2 million posts dataset; this dataset expands on it with much more data.

Alpindale's dataset did not include author handles or the image URLs and metadata attached to posts. The images and their captions could potentially be invaluable for training, so they have been collected here.

This is the small version of a larger dataset to come, intended for testing formatting and for smaller projects.

This dataset is my own and is unaffiliated with Bluesky or any potential employer.

The dataset could be used for:

- Research on social media content moderation
- Studying conversation structures and reply networks

### Loading the dataset normally

The dataset is meant to be downloaded with the Hugging Face `load_dataset()` function. From there you can either consume the dataset as an iterable stream, so you do not have to worry about memory, or convert it to a pandas dataframe.

Note that you will need to install the following libraries:

```bash
pip install pandas pyarrow datasets huggingface_hub
```

To download/load the Hugging Face dataset:

```python
from datasets import load_dataset

dataset = load_dataset("Roronotalt/bluesky", split="train")
```
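
If you would rather use the iterable stream mentioned above, a minimal sketch using the standard `datasets` streaming option:

```python
from datasets import load_dataset

# streaming=True returns an IterableDataset, so posts are fetched lazily
# instead of being materialized in memory
stream = load_dataset("Roronotalt/bluesky", split="train", streaming=True)

for post in stream:
    print(post)  # each item is a plain dict of column values
    break
```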

To pandas:

```python
new_dataset = dataset.to_pandas()
```

You can then save the pandas dataframe as a CSV.
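
For example, a one-line sketch (the output filename is hypothetical):

```python
# index=False keeps the pandas row index out of the CSV
new_dataset.to_csv("bluesky_posts.csv", index=False)
```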

Alternatively, if you download the provided dataset parquet file in /data, you can convert the file to a CSV with the following one-liner:

```bash
python -c "import pandas as pd;
df = pd.read_parquet('train-0000.parquet', engine='pyarrow');
df.to_csv('output_file.csv', index=False)
"
```

Credit to @TyrantsMuse on Twitter for the code snippet, @fr3fou for advice on compression, and @wavefnx for decoding the image bytes.

### Loading the dataset images

The dataset stores the bytes of a CID that can be used, in conjunction with the author DID, to get the image blob URL from Bluesky. The resulting URL may no longer be valid.

First you need the Bluesky atproto library:

```bash
pip install atproto
```

For this snippet it is assumed that you have already loaded the dataset, so it is up to you to pull out the parts of the post mentioned below. Then you can decode the image blob into a URL:
```python
import base64

from atproto import CID

# Image "blob", every dict in the embedded_array should have one
encoded_string = image["blob"]
# Post author DID, every post should have one
author_did = post["author_did"]

# The blobs were saved with a stray b'...' wrapper, so strip it first
if encoded_string.startswith("b'") and encoded_string.endswith("'"):
    encoded_string = encoded_string[2:-1]

# Bluesky image blob URL
url = f"https://bsky.social/xrpc/com.atproto.sync.getBlob?did={author_did}&cid={CID.decode(base64.b64decode(encoded_string))}"

# Caption for the image if one exists, or an empty string
captions = image["alt"]
```
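
To verify that a reconstructed URL still resolves, a small sketch (the `requests` dependency and the output filename are assumptions, not part of the dataset tooling):

```python
import requests

# A non-200 status usually means the post or its image has been deleted
response = requests.get(url, timeout=30)
if response.ok:
    with open("blob.jpg", "wb") as f:  # hypothetical filename; check the content type
        f.write(response.content)
```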

## Dataset Curation

The dataset is not filtered; sorting or filtering it for quality or moderation may make it more valuable for your use cases. The dataset is provided as-is, and no liability is implied.

Deduping was done based on the post URIs. The dataset is sorted by the author column.
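
If you want to deduplicate or reorder it further yourself, a minimal pandas sketch (the `uri` column name is an assumption based on the URI-based deduping described above; check `df.columns` first):

```python
import pandas as pd

df = dataset.to_pandas()

# Drop any residual duplicate posts by URI, then restore the author ordering
df = df.drop_duplicates(subset="uri").sort_values("author")
```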

```bibtex
@article{roronotalt_bluesky,
author = {Roronotalt},