---
language:
- en
license: apache-2.0
dataset_info:
  features:
  - name: type
    dtype: string
  - name: text
    dtype: string
  - name: created_at
    dtype: string
  - name: author
    dtype: string
  - name: author_did
    dtype: string
  - name: uri
    dtype: string
  - name: embedded_array
    list:
    - name: alt
      dtype: string
    - name: blob
      dtype: string
    - name: type
      dtype: string
  - name: langs
    sequence: string
  - name: reply_to
    dtype: string
  splits:
  - name: train
    num_bytes: 1213522565
    num_examples: 3459856
  download_size: 723842569
  dataset_size: 1213522565
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# Five Million Bluesky Posts

![image/png](https://cdn-uploads.huggingface.co/production/uploads/674783a7c6317bfd72b33659/YSHpZ-KDe4v9yGnDt6c6Y.png)

This dataset contains 5 million public posts collected from Bluesky Social's firehose API, intended for machine learning research and experimentation with social media data.

This dataset was inspired by Alpindale's original 2 million posts dataset and expands on it with much more data. Alpindale's dataset did not include author handles or the image URLs and metadata embedded in posts. The images and their captions could potentially be invaluable for training, so they have been collected here.

This is the small version of a larger dataset to come, intended for testing formatting and for smaller projects.

This dataset is my own and is unaffiliated with Bluesky or any potential employer.

## Dataset Structure

![image/png](https://cdn-uploads.huggingface.co/production/uploads/674783a7c6317bfd72b33659/9FA7LTPkffQDwrSL4F2z5.png)

- **Curated by:** Roro
- **License:** MIT

## Uses

The dataset could be used for:

- Studying social media trends
- Researching social media content moderation
- Studying conversation structures and reply networks (a sketch for rebuilding reply threads appears at the end of this card)

I have not been able to figure out how to parse the atproto image ref bytes into an image or blob URL; I would appreciate a PR for that. One possible, untested approach is sketched at the end of this card.

## Dataset Curation

The dataset is not filtered; filtering or sorting it for quality and moderation may make it more valuable for your use cases. The dataset is provided as-is and no liability is assumed. There are likely some duplicates: deduplication was done within each batch of 1 million posts but not across batches, so the remaining amount is likely negligible. A sketch of a global deduplication pass follows below.
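Since each post is uniquely identified by its `at://` URI, a single global pass over the `uri` column should drop whatever duplicates survived the per-batch dedup. A minimal sketch using the `datasets` library; the repo id below is a placeholder for this dataset's actual path on the Hub:

```python
from datasets import load_dataset

# Placeholder repo id: substitute this dataset's actual path on the Hub.
ds = load_dataset("your-username/bluesky-five-million", split="train")

# Posts are uniquely identified by their at:// URI, so dropping rows
# whose URI has already been seen removes cross-batch duplicates.
seen = set()

def first_occurrence(example):
    if example["uri"] in seen:
        return False
    seen.add(example["uri"])
    return True

deduped = ds.filter(first_occurrence)
print(f"removed {len(ds) - len(deduped)} duplicate posts")
```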
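For conversation-structure research, reply threads can be reconstructed by joining `reply_to` against `uri`. A minimal sketch, assuming `reply_to` holds the `at://` URI of the parent post and is empty or null for top-level posts:

```python
from collections import defaultdict

def build_reply_index(posts):
    """Map each post URI to the URIs of its direct replies."""
    replies = defaultdict(list)
    for post in posts:
        if post["reply_to"]:  # empty/None means a top-level post
            replies[post["reply_to"]].append(post["uri"])
    return replies
```

Walking this index downward from a top-level post's URI yields the full thread tree.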
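On the image-ref question raised above, one possible direction, untested and assuming the `blob` field preserves the raw CID bytes from the record's DAG-CBOR encoding: an atproto image ref is an IPLD CID, which can be rendered as a CIDv1 string (multibase base32) and fetched from the author's PDS through the `com.atproto.sync.getBlob` XRPC endpoint.

```python
import base64

def cid_to_string(cid_bytes: bytes) -> str:
    # DAG-CBOR encodes IPLD links with a leading 0x00 identity-multibase
    # byte before the CID itself; strip it if present.
    if cid_bytes and cid_bytes[0] == 0:
        cid_bytes = cid_bytes[1:]
    # A CIDv1 string is 'b' plus lowercase, unpadded RFC 4648 base32.
    return "b" + base64.b32encode(cid_bytes).decode("ascii").lower().rstrip("=")

def blob_url(author_did: str, cid: str) -> str:
    # com.atproto.sync.getBlob serves a blob given its owner's DID and CID.
    # bsky.social works for accounts it hosts; accounts on other PDSes
    # need that PDS's hostname instead.
    return f"https://bsky.social/xrpc/com.atproto.sync.getBlob?did={author_did}&cid={cid}"
```

If this holds, `blob_url(row["author_did"], cid_to_string(raw_ref))` should return a fetchable URL; a PR confirming or correcting it would still be welcome.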