---

tags:
- zero-shot-image-classification
- clip
library_tag: open_clip
license: apache-2.0
library_name: open_clip
pipeline_tag: zero-shot-image-classification
---

# Model card for CLIP-UniRepLKNet_large-laion5B-s11B-b75K



# Table of Contents

1. [Quick Start](#quick-start)

2. [Model Details](#model-details)

3. [Uses](#uses)

4. [Training Details](#training-details)

5. [Evaluation](#evaluation)

6. [Citation](#citation)



# Quick Start

```python
import torch

from modeling_UniRepLKNet import unireplknet_l  # model definition provided in modeling_UniRepLKNet.py

model_large = unireplknet_l()
print(model_large)

ckpt = torch.load("UniRepLKNet-L-b75k_s10B_CLIP-in1k_75.72.pt")
# strict=False because the classification heads are not needed (and not present) in CLIP pretraining.
model_large.load_state_dict(ckpt, strict=False)

print("Loaded CLIP Pretrained Models")
```
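
Once the weights are loaded, the backbone can be used as a standalone image feature extractor. The snippet below is a minimal sketch: it assumes the `unireplknet_l` module behaves like a standard `torch.nn.Module` mapping a batch of 224x224 images to pooled features, which may differ from the exact interface in `modeling_UniRepLKNet.py`.

```python
import torch

# Hypothetical usage sketch: run the loaded tower on a dummy batch of images.
model_large.eval()
with torch.no_grad():
    images = torch.randn(2, 3, 224, 224)   # dummy batch; replace with preprocessed images
    features = model_large(images)          # output shape depends on the backbone's pooling/head
print(features.shape)
```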

# Model Details



## Model Description



A series of CLIP UniRepLKNet models trained on LAION-2B (English), a subset of [LAION-5B](https://arxiv.org/abs/2210.08402), using [OpenCLIP](https://github.com/mlfoundations/open_clip).



| Model | Dataset | Resolution | Top-1 ImageNet Zero-Shot (%) |
| ----- | ------- | ---------- | ---------------------------- |
| [UniRepLKNet_large.laion5b_s11b_b75k](https://huggingface.co/Yiyuan/CLIP-UniRepLKNet-L-laion5B-s10B-b75k) | LAION-5B | 224x224 | 75.7 |





The core training run was performed in pieces over a period of ~2 months. The global batch size for the core run was 76800. The last ~25% of training was re-done at 320x320 image resolution. See more details in [Training Details](#training-details).



Goals:

  * Push the largest convolutional CLIP image tower into the performance range of ViT-g to ViT-G models, with improved image-size scaling for downstream use.



Firsts:

  * Largest released ConvNeXt model pretrained (847M params w/ 198 GMAC and 125 MActs @ 256x256 for image)

  * A non-ViT image tower CLIP model (with no previous image tower pretrain) achieving > 79% ImageNet top-1 zero-shot



The models utilize:

  * the UniRepLKNet model (`unireplknet_l`) as the image tower
  * a standard projection at the end of the image tower
  * a text tower of the same size (width 1024, 16 heads, depth 24) as the ViT-H-14 and ViT-g-14 models (see the config sketch below)


The models are trained at 224x224 image resolution for the first 192 epochs, then fine-tuned at 320x320 image resolution for the remaining 64 epochs. The combined image + text CLIP model has 1.2B params with 222 GMAC and 146 MActs. UniRepLKNet excels in both training and inference efficiency.
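
For reference, the sketch below expresses the towers described above as an OpenCLIP-style model configuration. The field names follow OpenCLIP's `vision_cfg`/`text_cfg` convention, but the exact keys used to register the UniRepLKNet image tower and the embedding dimension are assumptions, not the published config.

```python
# Hypothetical OpenCLIP-style config sketch; key names and embed_dim are assumptions.
clip_unireplknet_l_cfg = {
    "embed_dim": 768,                  # projection dimension (assumed)
    "vision_cfg": {
        "image_size": 224,             # raised to 320 for the final fine-tune epochs
        # UniRepLKNet-L backbone from modeling_UniRepLKNet.py, followed by a
        # standard linear projection to embed_dim
        "custom_backbone": "unireplknet_l",
    },
    "text_cfg": {
        "context_length": 77,
        "vocab_size": 49408,
        "width": 1024,                 # same text tower size as ViT-H-14 / ViT-g-14
        "heads": 16,
        "layers": 24,
    },
}
```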


Model training was done by Ross Wightman across both the [stability.ai](https://stability.ai/) cluster and the [JUWELS Booster](https://apps.fz-juelich.de/jsc/hps/juwels/booster-overview.html) supercomputer.

# Uses

As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such a model.

The OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. Additionally, the LAION-5B blog (https://laion.ai/blog/laion-5b/) and upcoming paper include additional discussion as it relates specifically to the training dataset. 

## Direct Use

Zero-shot image classification, image and text retrieval, among others.
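
As an illustration of the zero-shot classification use, the sketch below follows the usual OpenCLIP recipe (image/text encoding, normalization, cosine-similarity softmax). The model name and pretrained tag passed to `open_clip.create_model_and_transforms` are placeholders, since the exact registration of this checkpoint in OpenCLIP is not specified here.

```python
import torch
import open_clip
from PIL import Image

# Placeholder model/pretrained identifiers; substitute the actual registration for this checkpoint.
model, _, preprocess = open_clip.create_model_and_transforms(
    "unireplknet_l", pretrained="laion5b_s11b_b75k"
)
tokenizer = open_clip.get_tokenizer("unireplknet_l")

image = preprocess(Image.open("example.jpg")).unsqueeze(0)
text = tokenizer(["a diagram", "a dog", "a cat"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("Label probabilities:", probs)
```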

## Downstream Use

Image classification and other image task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others.
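
For the linear-probe use mentioned above, a minimal sketch is to freeze the image tower, extract features for a labeled dataset, and fit a linear classifier on top. The code below assumes the `model` and CLIP `preprocess` transform from the previous example, plus hypothetical `train_loader`/`test_loader` DataLoaders; it is illustrative rather than the procedure used for any reported results.

```python
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression

@torch.no_grad()
def extract_features(model, loader):
    # Encode every batch with the frozen image tower and collect (features, labels).
    feats, labels = [], []
    for images, targets in loader:
        f = model.encode_image(images)
        f = f / f.norm(dim=-1, keepdim=True)
        feats.append(f.cpu().numpy())
        labels.append(targets.numpy())
    return np.concatenate(feats), np.concatenate(labels)

# train_loader / test_loader are assumed DataLoaders built with the CLIP preprocess transform.
train_x, train_y = extract_features(model, train_loader)
test_x, test_y = extract_features(model, test_loader)

probe = LogisticRegression(max_iter=1000)
probe.fit(train_x, train_y)
print("Linear-probe accuracy:", probe.score(test_x, test_y))
```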

## Out-of-Scope Use

As per the OpenAI models,

**Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful. 

Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use.

Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.

Further to the above notice, the LAION-5B dataset used in training of these models has additional considerations; see below.

# Training Details

## Training Data

This model was trained with LAION-2B -- A 2 billion sample English subset of LAION-5B (https://laion.ai/blog/laion-5b/).

## Training Procedure

The main training run was done at a global batch size of 76800 for 256 checkpoint intervals of 45M samples each, for a total of ~11.5B samples seen over training.


An example slurm srun command line for a 128-GPU (40GB A100, 8 GPUs per node) configuration:

```
srun --cpu_bind=v --accel-bind=gn python -m training.main \
    --save-frequency 1 \
    --name "large-5b-76800-bf16" \
    --resume "latest" \
    --logs "/runs" \
    --log-every-n-steps 100 \
    --train-data="pipe:aws s3 cp s3://laion5b/laion2B-data/{000000..231349}.tar -" \
    --train-num-samples 45646078 \
    --dataset-type webdataset \
    --warmup 10000 \
    --batch-size=600 \
    --epochs=256 \
    --dataset-resampled \
    --precision amp_bfloat16 \
    --grad-clip-norm 5.0 \
    --lr 5e-4 \
    --workers=6 \
    --beta2 0.99 \
    --model "unireplknet_l" \
    --seed 0 \
    --ddp-static-graph \
    --local-loss \
    --gather-with-grad \
    --grad-checkpointing \
    --report-to "tensorboard"
```
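
As a quick sanity check on the numbers above, the totals implied by the command's `--train-num-samples` and `--epochs` flags can be recomputed directly; the ~11.5B figure quoted earlier corresponds to the rounded 45M-per-interval value, while the exact sample count gives roughly 11.7B.

```python
# Recompute the totals implied by the training command.
samples_per_interval = 45_646_078       # --train-num-samples
intervals = 256                         # --epochs (checkpoint intervals)
global_batch = 76_800                   # 128 GPUs x --batch-size=600

total_samples = samples_per_interval * intervals            # ~11.7B samples seen
steps_per_interval = samples_per_interval // global_batch   # ~594 optimizer steps per interval

print(f"total samples: {total_samples:,}")
print(f"steps per interval: {steps_per_interval:,}")
```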


# Evaluation

Evaluation was done with code from the [LAION CLIP Benchmark suite](https://github.com/LAION-AI/CLIP_benchmark).


## Results

These models achieve between 75.7 and 76.3 top-1 zero-shot accuracy on ImageNet-1k.


# Citation

**BibTeX:**

UniRepLKNet

```bibtex

@inproceedings{ding2024unireplknet,

  title={UniRepLKNet: A Universal Perception Large-Kernel ConvNet for Audio Video Point Cloud Time-Series and Image Recognition},

  author={Ding, Xiaohan and Zhang, Yiyuan and Ge, Yixiao and Zhao, Sijie and Song, Lin and Yue, Xiangyu and Shan, Ying},

  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},

  pages={5513--5524},

  year={2024}

}

```

LAION-5B
```bibtex

@inproceedings{schuhmann2022laionb,

  title={{LAION}-5B: An open large-scale dataset for training next generation image-text models},

  author={Christoph Schuhmann and

          Romain Beaumont and

          Richard Vencu and

          Cade W Gordon and

          Ross Wightman and

          Mehdi Cherti and

          Theo Coombes and

          Aarush Katta and

          Clayton Mullis and

          Mitchell Wortsman and

          Patrick Schramowski and

          Srivatsa R Kundurthy and

          Katherine Crowson and

          Ludwig Schmidt and

          Robert Kaczmarczyk and

          Jenia Jitsev},

  booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},

  year={2022},

  url={https://openreview.net/forum?id=M3Y74vmsMcY}

}

```

OpenCLIP software
```bibtex

@software{ilharco_gabriel_2021_5143773,

  author       = {Ilharco, Gabriel and

                  Wortsman, Mitchell and

                  Wightman, Ross and

                  Gordon, Cade and

                  Carlini, Nicholas and

                  Taori, Rohan and

                  Dave, Achal and

                  Shankar, Vaishaal and

                  Namkoong, Hongseok and

                  Miller, John and

                  Hajishirzi, Hannaneh and

                  Farhadi, Ali and

                  Schmidt, Ludwig},

  title        = {OpenCLIP},

  month        = jul,

  year         = 2021,

  note         = {If you use this software, please cite it as below.},

  publisher    = {Zenodo},

  version      = {0.1},

  doi          = {10.5281/zenodo.5143773},

  url          = {https://doi.org/10.5281/zenodo.5143773}

}

```

OpenAI CLIP paper


```bibtex

@inproceedings{Radford2021LearningTV,
  title={Learning Transferable Visual Models From Natural Language Supervision},
  author={Alec Radford and Jong Wook Kim and Chris Hallacy and Aditya Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
  booktitle={ICML},
  year={2021}
}

```