---
language:
- en
license: apache-2.0
dataset_info:
  features:
  - name: type
    dtype: large_string
  - name: text
    dtype: large_string
  - name: created_at
    dtype: large_string
  - name: author
    dtype: large_string
  - name: author_did
    dtype: large_string
  - name: uri
    dtype: large_string
  - name: embedded_array
    large_list:
    - name: alt
      dtype: large_string
    - name: blob
      dtype: large_string
    - name: type
      dtype: large_string
  - name: langs
    large_list: large_string
  - name: reply_to
    dtype: large_string
  splits:
  - name: train
    num_bytes: 43873366890
    num_examples: 94967071
  download_size: 12292775939
  dataset_size: 43873366890
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# Five Million Bluesky Posts

![image/png](https://cdn-uploads.huggingface.co/production/uploads/674783a7c6317bfd72b33659/bkAI0CJNPcZMrrP7VgGIC.png)

This dataset contains 5 million public posts collected from Bluesky Social's firehose API, intended for machine learning research and experimentation with social media data.

It was inspired by Alpindale's original two-million-post dataset and expands on it with much more data.

Alpindale's dataset did not include author handles or the image URLs and metadata embedded in posts. The images and their captions could potentially be invaluable for training, so they have been collected here.

This is the small version of a larger dataset to come, intended for testing formatting and for smaller projects.

This dataset is my own and is unaffiliated with Bluesky or any potential employer.

## Dataset Structure

![image/png](https://cdn-uploads.huggingface.co/production/uploads/674783a7c6317bfd72b33659/9FA7LTPkffQDwrSL4F2z5.png)

- **Curated by:** Roro
- **License:** Apache 2.0

## Uses

The dataset could be used for:
- Studying social media trends
- Researching social media content moderation
- Studying conversation structures and reply networks (see the sketch below)
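
As a quick illustration of the last use case, here is a minimal sketch that groups posts into reply threads using the `uri` and `reply_to` columns. It assumes the dataset has already been converted to a pandas DataFrame named `df`, as shown later in this card:

```python
from collections import defaultdict

# Map each parent post URI to the URIs of its direct replies;
# reply_to is empty for top-level posts.
replies = defaultdict(list)
for uri, reply_to in zip(df["uri"], df["reply_to"]):
    if reply_to:
        replies[reply_to].append(uri)

# The ten posts with the most direct replies
top = sorted(replies.items(), key=lambda kv: len(kv[1]), reverse=True)[:10]
```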

### Loading the dataset normally

The dataset is meant to be loaded with the Hugging Face `load_dataset()` function. From there you can either stream the dataset as an iterable, so you do not have to worry about memory, or convert it to a pandas DataFrame.

Note that you will need to install the following libraries:

```bash
pip install pandas pyarrow datasets huggingface_hub
```

To download/load the Hugging Face dataset:

```python
from datasets import load_dataset

dataset = load_dataset("Roronotalt/bluesky", split="train")
```
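
If you would rather not download the full split up front, a minimal streaming sketch (using the standard `datasets` streaming mode) looks like this:

```python
from datasets import load_dataset

# streaming=True yields an IterableDataset, so the full split is never held in memory
stream = load_dataset("Roronotalt/bluesky", split="train", streaming=True)
for post in stream.take(5):
    print(post["author"], post["text"][:80])
```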

To pandas:

```python
new_dataset = dataset.to_pandas()
```

You can then save the pandas DataFrame as a CSV.
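
For example (the output filename here is just a placeholder):

```python
new_dataset.to_csv("bluesky_posts.csv", index=False)
```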

Alternatively, if you download the provided dataset Parquet file in /data, you can convert the file to a CSV using the following Python code:

```bash
python -c "import pandas as pd; \
df = pd.read_parquet('train-0000.parquet', engine='pyarrow'); \
df.to_csv('output_file.csv', index=False)"
```

Credit to @TyrantsMuse on Twitter for the code snippet, @fr3fou for advice on compression, and @wavefnx for decoding the image bytes.

### Loading the dataset images

The dataset stores, for each image, the bytes of a CID that can be used in conjunction with the author DID to build the image blob URL on Bluesky. The resulting URL may no longer be valid.

First you need the Bluesky atproto library:

```bash
pip install atproto
```

This snippet assumes that you have already loaded the dataset; it is up to you to pull out the post fields referenced below. You can then decode the image blob into a URL:

```python
import base64

from atproto import CID

# Image "blob"; every dict in the embedded_array should have one
encoded_string = image["blob"]
# Post author DID; every post should have one
author_did = post["author_did"]

# The blob strings were serialized with a stray Python bytes wrapper (b'...'),
# so strip it before decoding
if encoded_string.startswith("b'") and encoded_string.endswith("'"):
    encoded_string = encoded_string[2:-1]

# Bluesky image blob URL
url = f"https://bsky.social/xrpc/com.atproto.sync.getBlob?did={author_did}&cid={CID.decode(base64.b64decode(encoded_string))}"
# Caption for the image if one exists, or an empty string
caption = image["alt"]
```
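
From there, fetching the actual image is a plain HTTP GET. A short sketch, assuming the `requests` library is installed and the blob still exists on Bluesky's servers:

```python
import requests

response = requests.get(url, timeout=30)
response.raise_for_status()  # the blob may have been deleted, so check the status
with open("image.jpg", "wb") as f:
    f.write(response.content)
```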

## Dataset Curation

The dataset is not filtered; sorting it for quality or applying moderation may make it more valuable for your use cases. The dataset is provided as-is and no liability is implied.

Deduping was done based on the post URIs. The dataset is sorted by the author column.
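
For reference, the deduplication and sorting can be reproduced in pandas along these lines (a sketch, not the exact pipeline code):

```python
# Drop posts sharing a URI, then order by author
df = df.drop_duplicates(subset="uri").sort_values("author").reset_index(drop=True)
```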

## Citation

```bibtex
@article{roronotalt_bluesky,
  author = {Roronotalt},
  title = {Bluesky Dataset},
  year = {2024}
}
```