---
language:
- en
license: apache-2.0
dataset_info:
  features:
  - name: type
    dtype: large_string
  - name: text
    dtype: large_string
  - name: created_at
    dtype: large_string
  - name: author
    dtype: large_string
  - name: author_did
    dtype: large_string
  - name: uri
    dtype: large_string
  - name: embedded_array
    large_list:
    - name: alt
      dtype: large_string
    - name: blob
      dtype: large_string
    - name: type
      dtype: large_string
  - name: langs
    large_list: large_string
  - name: reply_to
    dtype: large_string
  splits:
  - name: train
    num_bytes: 43873366890
    num_examples: 94967071
  download_size: 12292775939
  dataset_size: 43873366890
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# Five Million Bluesky Posts


![image/png](https://cdn-uploads.huggingface.co/production/uploads/674783a7c6317bfd72b33659/bkAI0CJNPcZMrrP7VgGIC.png)

<!-- Provide a quick summary of the dataset. -->

This dataset contains 5 million public posts collected from Bluesky Social's firehose API, intended for machine learning research and experimentation with social media data.

This dataset was inspired by Alpindale's original two-million-post dataset and expands on it with much more data.

Alpindale's dataset did not include author handles or the image URLs and metadata attached to posts. The images and their captions could potentially be invaluable for training, so they have been collected here.

This is the smaller version of a larger dataset to come, intended for testing formatting and for smaller projects.

This dataset is my own and is unaffiliated with Bluesky or any potential employer.


## Dataset Structure

<!-- Provide a longer summary of what this dataset is. -->


![image/png](https://cdn-uploads.huggingface.co/production/uploads/674783a7c6317bfd72b33659/9FA7LTPkffQDwrSL4F2z5.png)

- **Curated by:** Roro
- **License:** MIT

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

The dataset could be used for:
- Studying social media trends
- Researching social media content moderation
- Studying conversation structures and reply networks (see the sketch after this list)
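
As a concrete example of the last item, replies can be joined back to their parent posts through the `reply_to` and `uri` columns. This is only a minimal sketch: it assumes `reply_to` holds the parent post's URI (as the schema suggests), and the slice size and column selection are illustrative; loading the dataset itself is covered in the next section.

```python
import pandas as pd
from datasets import load_dataset

# Load a slice of the dataset and convert it to a DataFrame
posts = load_dataset("Roronotalt/bluesky", split="train[:100000]").to_pandas()

# Keep only posts that are replies, i.e. have a non-empty reply_to field
replies = posts[posts["reply_to"].notna() & (posts["reply_to"] != "")]

# Join each reply to its parent post by matching reply_to against the parent's uri
threads = replies.merge(
    posts[["uri", "author", "text"]],
    left_on="reply_to",
    right_on="uri",
    suffixes=("_reply", "_parent"),
)
print(threads[["author_parent", "text_parent", "author_reply", "text_reply"]].head())
```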


### Loading the dataset normally
The dataset is meant to be downloaded with the Hugging Face `load_dataset()` function. From there you can either iterate over the dataset as a stream, so you do not have to worry about memory, or convert it to a pandas DataFrame.

Note that you will need to install the following libraries:
```bash
pip install pandas pyarrow datasets huggingface_hub
```

To download and load the Hugging Face dataset:
```python
from datasets import load_dataset
dataset = load_dataset("Roronotalt/bluesky", split="train")
```
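
If you do not want to download everything up front, the same call can stream records lazily (a minimal sketch using the standard `streaming=True` option):

```python
from datasets import load_dataset

# Stream the dataset so rows are fetched lazily instead of materializing ~44 GB in memory
stream = load_dataset("Roronotalt/bluesky", split="train", streaming=True)

# Inspect the first few posts
for i, post in enumerate(stream):
    print(post["author"], post["text"][:80])
    if i >= 4:
        break
```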

To convert the regular (non-streaming) dataset to a pandas DataFrame:
```python
new_dataset = dataset.to_pandas()
```

You can then save the pandas DataFrame as a CSV.
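
For example (a one-line sketch; the output filename is arbitrary):

```python
# Write the full DataFrame out as CSV
new_dataset.to_csv("bluesky_posts.csv", index=False)
```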

Alternatively, if you download the provided dataset parquet file in /data, you can convert the file to a CSV using the following Python code:

```bash
python -c "import pandas as pd;
df = pd.read_parquet('train-0000.parquet', engine='pyarrow');
df.to_csv('output_file.csv', index=False)"
```
Credit to @TyrantsMuse on Twitter for the code snippet, @fr3fou for advice on compression, and @wavefnx for decoding the image bytes.

### Loading the dataset images
The dataset stores the bytes of a CID that can be used in conjunction with the author DID to build the image blob URL on Bluesky. The URL may no longer be valid.

First you need the Bluesky atproto library:
```bash
pip install atproto
```

This snippet assumes you have already loaded the dataset; it is up to you to extract the parts of the post referenced below.
Then you can decode the image blob into a URL:
```python
from atproto import CID
import base64

# Image "blob", every dict in the embedded_array should have one
encoded_string = image["blob"]
# Post author DID, every post should have one
author_did = post["author_did"]

# The blob string was stored with a stray Python bytes-literal wrapper (b'...'), so strip it
if encoded_string.startswith("b'") and encoded_string.endswith("'"):
    encoded_string = encoded_string[2:-1]

# Bluesky image blob URL
url = f"https://bsky.social/xrpc/com.atproto.sync.getBlob?did={author_did}&cid={CID.decode(base64.b64decode(encoded_string))}"
# Caption for image if one exists or empty string
captions = image["alt"]
```
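
Once the URL has been built, downloading the image is a plain HTTP GET. This is a sketch using the `requests` library (not part of the snippet above); note that the blob endpoint can return an error if the post or blob has since been deleted:

```python
import requests

# Try to fetch the image bytes from the blob endpoint
response = requests.get(url, timeout=30)
if response.ok:
    with open("image_blob", "wb") as f:
        f.write(response.content)
else:
    print(f"Blob not available: HTTP {response.status_code}")
```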

## Dataset Curation

The dataset is not filtered; sorting or filtering it for quality and moderation may make it more valuable for your use cases. The dataset is provided as-is, and no liability is assumed.

Deduping was done based on the post URIs, and the dataset is sorted by the author column.
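
If you recombine or further filter the parquet shards yourself, the same deduplication and ordering can be reproduced with pandas (a minimal sketch; `df` is assumed to be a DataFrame with the schema above):

```python
# Drop duplicate posts by URI and restore the author ordering
df = df.drop_duplicates(subset="uri").sort_values("author").reset_index(drop=True)
```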

If you use this dataset, please cite it:
```bibtex
@article{roronotalt_bluesky,
  author = {Roronotalt},
  title = {Bluesky Dataset},
  year = {2024}
}
```