---
dataset_info:
  features:
  - name: input_ids
    sequence: int32
  - name: aa_seqs
    dtype: string
  splits:
  - name: train
    num_bytes: 61101706188
    num_examples: 9920628
  download_size: 5540646354
  dataset_size: 61101706188
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

Roughly 10 million (9,920,628) examples randomly sampled from UniRef50 representative sequences (October 2023), each paired with a computed [SELFIES](https://github.com/aspuru-guzik-group/selfies) string. The SELFIES strings are stored as input IDs produced by a custom SELFIES tokenizer; a BERT tokenizer with this vocabulary is included in this dataset's files. Intended for atom-wise protein language modeling.
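For reference, a minimal sketch of how a SELFIES string can be derived from an amino-acid sequence, assuming the common RDKit sequence → SMILES → SELFIES route (the exact pipeline used to build this dataset is not documented here, and the peptide below is a made-up example):

```python
from rdkit import Chem
import selfies as sf

aa_seq = "MKTAYIAKQR"               # hypothetical short peptide
mol = Chem.MolFromSequence(aa_seq)  # build the peptide as an atom-level molecule
smiles = Chem.MolToSmiles(mol)      # canonical SMILES for the peptide
selfies_str = sf.encoder(smiles)    # SMILES -> SELFIES string
print(selfies_str)
```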
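A minimal usage sketch for loading the dataset and the bundled tokenizer. The repo ID and the tokenizer file name below are placeholders; substitute the actual values from this repository's files:

```python
from datasets import load_dataset
from huggingface_hub import hf_hub_download
from transformers import PreTrainedTokenizerFast

REPO_ID = "user/uniref50-selfies"  # placeholder: use this dataset's actual repo ID

# Stream the train split instead of downloading all ~5.5 GB of shards up front.
ds = load_dataset(REPO_ID, split="train", streaming=True)

# Fetch the BERT-style tokenizer stored in this dataset repo's files;
# the file name "tokenizer.json" is an assumption.
tok_file = hf_hub_download(REPO_ID, "tokenizer.json", repo_type="dataset")
tokenizer = PreTrainedTokenizerFast(tokenizer_file=tok_file)

row = next(iter(ds))
print(row["aa_seqs"][:60])                      # amino-acid sequence
print(tokenizer.decode(row["input_ids"][:25]))  # first SELFIES tokens as text
```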