---
tags:
- dna
- human_genome
---

# GENA-LM (gena-lm-bigbird-base-sparse-t2t)

GENA-LM is a family of open-source foundational models for long DNA sequences.

GENA-LM models are transformer masked language models trained on human DNA sequences.

`gena-lm-bigbird-base-sparse-t2t` follows the BigBird architecture and uses sparse attention from DeepSpeed.

Differences between GENA-LM (`gena-lm-bigbird-base-sparse-t2t`) and DNABERT:
- BPE tokenization instead of k-mers;
- input sequence size is about 36,000 nucleotides (4096 BPE tokens), compared to 512 nucleotides for DNABERT (see the tokenizer sketch below);
- pre-training on T2T vs. GRCh38.p13 human genome assembly.
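
The nucleotide-to-token compression can be checked directly with the released tokenizer. The snippet below is a minimal sketch; the exact ratio depends on the sequence, but on average one BPE token covers roughly 9 bp, so 4096 tokens span about 36,000 nucleotides.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('AIRI-Institute/gena-lm-bigbird-base-sparse-t2t')

# Toy DNA fragment; real inputs can be tens of thousands of nucleotides long.
seq = 'ATGCGTACGTTAGCACCGGTTA' * 100

tokens = tokenizer.tokenize(seq)
print(f'{len(seq)} nucleotides -> {len(tokens)} BPE tokens '
      f'(~{len(seq) / len(tokens):.1f} nucleotides per token)')
```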

Source code and data: https://github.com/AIRI-Institute/GENA_LM

Paper: https://academic.oup.com/nar/article/53/2/gkae1310/7954523

## Installation
`gena-lm-bigbird-base-sparse-t2t` sparse ops require DeepSpeed.

### DeepSpeed
A DeepSpeed installation is needed to work with the SparseAttention versions of the language models. DeepSpeed sparse attention supports only GPUs with compute capability >= 7.0 (V100, T4, A100).
```bash
pip install triton==1.0.0
DS_BUILD_SPARSE_ATTN=1 pip install deepspeed==0.6.0 --global-option="build_ext" --global-option="-j8" --no-cache
```
and check installation with
```bash
ds_report
```
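
Before building, you can confirm that your GPU meets the compute-capability requirement with a small PyTorch check (a sketch, not part of the official setup):

```python
import torch

# DeepSpeed sparse attention needs a CUDA GPU with compute capability >= 7.0.
assert torch.cuda.is_available(), 'No CUDA device found'
major, minor = torch.cuda.get_device_capability(0)
print(f'GPU compute capability: {major}.{minor}')
assert major >= 7, 'Sparse attention kernels require compute capability >= 7.0'
```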

### APEX for FP16
Install APEX following https://github.com/NVIDIA/apex#quick-start:
```bash
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
```
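
A quick sanity check that the build succeeded (assuming you need the `amp` extension) is to import it:

```bash
python -c "from apex import amp; print('APEX amp available')"
```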

## Examples

### How to load the pre-trained model for Masked Language Modeling
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('AIRI-Institute/gena-lm-bigbird-base-sparse-t2t')
model = AutoModel.from_pretrained('AIRI-Institute/gena-lm-bigbird-base-sparse-t2t', trust_remote_code=True)

```
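
Once loaded, the model can be run on tokenized DNA in the usual Hugging Face way. The sketch below assumes the custom `BertModel` follows the standard output convention (`last_hidden_state`) and that the sparse-attention kernels from the Installation section are available:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('AIRI-Institute/gena-lm-bigbird-base-sparse-t2t')
model = AutoModel.from_pretrained('AIRI-Institute/gena-lm-bigbird-base-sparse-t2t', trust_remote_code=True)

# Tokenize a short DNA fragment and run a forward pass to get token embeddings.
seq = 'ATGCGTACGTTAGCACCGGTTA'
inputs = tokenizer(seq, return_tensors='pt')

with torch.no_grad():
    outputs = model(**inputs)

# Assumes standard Hugging Face BERT-style outputs.
print(outputs.last_hidden_state.shape)  # (1, num_tokens, 768)
```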

### How to load the pre-trained model to fine-tune it on a classification task
Get the model class from the GENA-LM repository:
```bash
git clone https://github.com/AIRI-Institute/GENA_LM.git
```

```python
from GENA_LM.src.gena_lm.modeling_bert import BertForSequenceClassification
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('AIRI-Institute/gena-lm-bigbird-base-sparse-t2t')
model = BertForSequenceClassification.from_pretrained('AIRI-Institute/gena-lm-bigbird-base-sparse-t2t')
```
Alternatively, download [modeling_bert.py](https://github.com/AIRI-Institute/GENA_LM/tree/main/src/gena_lm) and place it next to your code.

You can also resolve the model class through Hugging Face `AutoModel`:
```python
from transformers import AutoTokenizer, AutoModel
model = AutoModel.from_pretrained('AIRI-Institute/gena-lm-bigbird-base-sparse-t2t', trust_remote_code=True)
gena_module_name = model.__class__.__module__
print(gena_module_name)
import importlib
# available class names:
# - BertModel, BertForPreTraining, BertForMaskedLM, BertForNextSentencePrediction,
# - BertForSequenceClassification, BertForMultipleChoice, BertForTokenClassification,
# - BertForQuestionAnswering
# check https://huggingface.co/docs/transformers/model_doc/bert
cls = getattr(importlib.import_module(gena_module_name), 'BertForSequenceClassification')
print(cls)
model = cls.from_pretrained('AIRI-Institute/gena-lm-bigbird-base-sparse-t2t', num_labels=2)
```
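
Whichever way the class is obtained, fine-tuning follows the usual Hugging Face sequence-classification pattern. Below is a minimal, hypothetical sketch of a single training step; the sequences and binary labels are made up for illustration:

```python
import importlib
import torch
from transformers import AutoTokenizer, AutoModel

model_name = 'AIRI-Institute/gena-lm-bigbird-base-sparse-t2t'
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Resolve the classification class from the model's remote-code module (as above).
base = AutoModel.from_pretrained(model_name, trust_remote_code=True)
cls = getattr(importlib.import_module(base.__class__.__module__), 'BertForSequenceClassification')
model = cls.from_pretrained(model_name, num_labels=2)

# Two toy DNA sequences with made-up binary labels.
batch = tokenizer(['ATGCGTACGT' * 20, 'TTACGGATCC' * 20], padding=True, return_tensors='pt')
labels = torch.tensor([0, 1])

# Assumes the custom class follows the standard Hugging Face output convention.
outputs = model(**batch, labels=labels)
outputs.loss.backward()  # plug an optimizer or the HF Trainer in from here
print(float(outputs.loss))
```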

## Model description
The GENA-LM `gena-lm-bigbird-base-sparse-t2t` model is trained in a masked language modeling (MLM) fashion, following the approach of the BigBird paper, with 15% of tokens masked. The model config is similar to `google/bigbird-roberta-base` (the key values can also be read from the published config, as sketched after the list):

- Maximum sequence length: 4096 tokens
- Layers: 12, attention heads: 12
- Hidden size: 768
- Sparse attention config:
    - block size: 64
    - random blocks: 3
    - global blocks: 2
    - sliding window blocks: 3
- Rotary positional embeddings
- Vocabulary size: 32k, tokenizer trained on DNA data
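
For reference, most of these values can be read from the published config. Field names below follow the standard BERT config; the sparse-attention settings live in model-specific fields and may be named differently:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained('AIRI-Institute/gena-lm-bigbird-base-sparse-t2t',
                                    trust_remote_code=True)
# Standard BERT-style fields (layer count, heads, hidden size, vocabulary).
print(config.num_hidden_layers, config.num_attention_heads,
      config.hidden_size, config.vocab_size)
```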

We pre-trained `gena-lm-bigbird-base-sparse-t2t` on the latest T2T human genome assembly (https://www.ncbi.nlm.nih.gov/assembly/GCA_009914755.3/). The data was augmented by sampling mutations from 1000-genome SNPs (gnomAD dataset). Pre-training was performed for 800,000 iterations with a batch size of 256. We modified the Transformer architecture with [Pre-Layer normalization](https://arxiv.org/abs/2002.04745).
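
The 15% masking objective can be reproduced with the standard Hugging Face collator; the following is an illustrative sketch, not the exact pre-training pipeline:

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained('AIRI-Institute/gena-lm-bigbird-base-sparse-t2t')

# Mask 15% of BPE tokens, as in BERT/BigBird-style MLM pre-training.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

batch = collator([tokenizer('ATGCGTACGTTAGCACCGGTTA' * 50)])
print(batch['input_ids'].shape, batch['labels'].shape)
```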

## Evaluation
For evaluation results, see our paper: https://academic.oup.com/nar/article/53/2/gkae1310/7954523

## Citation
```bibtex
@article{GENA_LM,
    author = {Fishman, Veniamin and Kuratov, Yuri and Shmelev, Aleksei and Petrov, Maxim and Penzar, Dmitry and Shepelin, Denis and Chekanov, Nikolay and Kardymon, Olga and Burtsev, Mikhail},
    title = {GENA-LM: a family of open-source foundational DNA language models for long sequences},
    journal = {Nucleic Acids Research},
    volume = {53},
    number = {2},
    pages = {gkae1310},
    year = {2025},
    month = {01},
    issn = {0305-1048},
    doi = {10.1093/nar/gkae1310},
    url = {https://doi.org/10.1093/nar/gkae1310},
    eprint = {https://academic.oup.com/nar/article-pdf/53/2/gkae1310/61443229/gkae1310.pdf},
}
```