---
license: mit
task_categories:
- feature-extraction
tags:
- embeddings
size_categories:
- 100K<n<1M
---

Want to analyze proteins but lack embeddings? Want to perform vector similarity search? Want a reference set of known protein embeddings? Look no further!

This repository is a dataset of [Reviewed Swiss-Prot Proteins](https://www.uniprot.org/help/downloads). For each protein, I compute embeddings with [ESM2](https://github.com/facebookresearch/esm) (6-layer model) and [ProteinCLIP](https://github.com/wukevin/proteinclip).


## Specs

See the data viewer for all information. Most of the metadata on each protein, and the sequences themselves, come from [Reviewed Swiss-Prot Proteins](https://www.uniprot.org/help/downloads).

**Important columns**
- `accession`: the UniProt accession number that uniquely identifies each protein
- `embedding`: a 128-dimensional vector produced by the [ProteinCLIP](https://github.com/wukevin/proteinclip) model from the last-layer embedding of the ESM2 6-layer model
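As a sketch of working with these columns, one might stack the `embedding` column into a NumPy matrix for vectorized math. The rows below are made up for illustration; only the shapes match the dataset description.

```python
import numpy as np

# Hypothetical rows; real rows come from this dataset, with real UniProt
# accessions and 128-dim ProteinCLIP embeddings.
rows = [
    {"accession": "P00001", "embedding": [0.01 * i for i in range(128)]},
    {"accession": "P00002", "embedding": [0.02 * i for i in range(128)]},
]

# Stack into an (n_proteins, 128) float matrix.
embeddings = np.array([r["embedding"] for r in rows], dtype=np.float32)
accessions = [r["accession"] for r in rows]

print(embeddings.shape)  # (2, 128)
```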

## Examples

This dataset is easy to use! Here are some quick examples.

### Example 1

Uploading all the embeddings to Nomic Atlas produces an interactive map: https://atlas.nomic.ai/data/donnybertucci/lackadaisical-goodfellow/map
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/6260e4e99c4c9dc0ed60e8ca/xyugJj612OtpsixQy2oio.qt"></video>

### Example 2

Similarity search over the embeddings with cosine similarity: https://github.com/xnought/DS569k-viewer (live site: https://ocular.cc.gatech.edu/DS569k/)

<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/6260e4e99c4c9dc0ed60e8ca/gCdHAI7vSCOv_BHOCQqqc.qt"></video>
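A minimal sketch of the kind of cosine-similarity search shown above. The toy 4-dimensional vectors stand in for the 128-dim ProteinCLIP embeddings; with the real data you would build the matrix from the `embedding` column and map the returned indices back to `accession` values.

```python
import numpy as np

def top_k_cosine(query: np.ndarray, matrix: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k rows of `matrix` most cosine-similar to `query`."""
    # Normalize so the dot product equals cosine similarity.
    q = query / np.linalg.norm(query)
    m = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)
    sims = m @ q
    # Sort descending by similarity and keep the top k indices.
    return np.argsort(-sims)[:k]

# Toy stand-ins for the embedding matrix.
db = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.9, 0.1, 0.0, 0.0],
])
query = np.array([1.0, 0.05, 0.0, 0.0])

print(top_k_cosine(query, db, k=2))  # → [0 2]
```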

## Credit

All credit for the data goes to https://www.uniprot.org/ and https://www.expasy.org/resources/uniprotkb-swiss-prot, and to the original authors of each protein entry. The data comes directly from them.

Large pieces of code were copied from https://github.com/wukevin/proteinclip to compute both the ESM and ProteinCLIP embeddings. Without their pretrained models and code, I could not have produced the embeddings.

And credit to FAIR ESM for the pretrained ESM2 models: https://github.com/facebookresearch/esm.