BUAADreamer committed 31d6957 (verified) · Parent: b427e1c

Update README.md

Files changed (1): README.md (+263 −1)

README.md CHANGED
@@ -18,4 +18,266 @@ language:
  - es
metrics:
  - recall
---

# CCRK: Improving the Consistency in Cross-Lingual Cross-Modal Retrieval with 1-to-K Contrastive Learning

[![license](https://img.shields.io/github/license/mashape/apistatus.svg?maxAge=2592000)](https://github.com/BUAADreamer/CCRK/blob/main/licence)
[![arxiv badge](https://img.shields.io/badge/arxiv-2406.18254-red)](https://arxiv.org/abs/2406.18254)
[![Pytorch](https://img.shields.io/badge/PyTorch-%23EE4C2C.svg?e&logo=PyTorch&logoColor=white)](https://pytorch.org/)

[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/improving-the-consistency-in-cross-lingual/zero-shot-cross-lingual-text-to-image-1)](https://paperswithcode.com/sota/zero-shot-cross-lingual-text-to-image-1?p=improving-the-consistency-in-cross-lingual)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/improving-the-consistency-in-cross-lingual/zero-shot-cross-lingual-image-to-text-1)](https://paperswithcode.com/sota/zero-shot-cross-lingual-image-to-text-1?p=improving-the-consistency-in-cross-lingual)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/improving-the-consistency-in-cross-lingual/zero-shot-cross-lingual-text-to-image)](https://paperswithcode.com/sota/zero-shot-cross-lingual-text-to-image?p=improving-the-consistency-in-cross-lingual)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/improving-the-consistency-in-cross-lingual/zero-shot-cross-lingual-image-to-text)](https://paperswithcode.com/sota/zero-shot-cross-lingual-image-to-text?p=improving-the-consistency-in-cross-lingual)

> Cross-lingual Cross-modal Retrieval (CCR) is an essential task in web search, which aims to break the barriers between modality and language simultaneously and achieves image-text retrieval in the multi-lingual scenario with a single model. In recent years, excellent progress has been made based on cross-lingual cross-modal pre-training; particularly, the methods based on contrastive learning on large-scale data have significantly improved retrieval tasks. However, these methods directly follow the existing pre-training methods in the cross-lingual or cross-modal domain, leading to two problems of inconsistency in CCR: The methods with cross-lingual style suffer from the intra-modal error propagation, resulting in inconsistent recall performance across languages in the whole dataset. The methods with cross-modal style suffer from the inter-modal optimization direction bias, resulting in inconsistent rank across languages within each instance, which cannot be reflected by Recall@K. To solve these problems, we propose a simple but effective 1-to-K contrastive learning method, which treats each language equally and eliminates error propagation and optimization bias. In addition, we propose a new evaluation metric, Mean Rank Variance (MRV), to reflect the rank inconsistency across languages within each instance. Extensive experiments on four CCR datasets show that our method improves both recall rates and MRV with smaller-scale pre-trained data, achieving the new state of the art.

<div align="center">
<img src="https://github.com/BUAADreamer/CCRK/blob/master/pics/overview.png" width="95%" height="auto" />
<img src="https://github.com/BUAADreamer/CCRK/blob/master/pics/result.png" width="95%" height="auto" />
</div>

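To make the 1-to-K idea above concrete, the toy sketch below contrasts each image against its captions in all K languages at once, so every language is treated equally. It is written only from the description above; the function name, shapes, and temperature are illustrative and this is not the repo's actual pre-training loss.

```python
# Toy sketch of a 1-to-K image-text contrastive objective (illustrative only).
import torch
import torch.nn.functional as F

def one_to_k_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """image_emb: (B, D) image features; text_emb: (B, K, D) caption features,
    one caption per language for each image. Features are L2-normalized."""
    B, K, D = text_emb.shape
    texts = text_emb.reshape(B * K, D)

    # Image-to-text: each image has K positives (its captions in all K languages).
    logits_i2t = image_emb @ texts.t() / temperature          # (B, B*K)
    targets_i2t = torch.zeros_like(logits_i2t)
    for i in range(B):
        targets_i2t[i, i * K:(i + 1) * K] = 1.0 / K           # uniform over the K positives
    loss_i2t = -(targets_i2t * F.log_softmax(logits_i2t, dim=1)).sum(dim=1).mean()

    # Text-to-image: each caption (in any language) has a single positive image.
    logits_t2i = texts @ image_emb.t() / temperature          # (B*K, B)
    targets_t2i = torch.arange(B).repeat_interleave(K)
    loss_t2i = F.cross_entropy(logits_t2i, targets_t2i)

    return (loss_i2t + loss_t2i) / 2

# Example with random features (K = 10 languages, as in `10lan`):
img = F.normalize(torch.randn(4, 256), dim=-1)
txt = F.normalize(torch.randn(4, 10, 256), dim=-1)
print(one_to_k_contrastive_loss(img, txt))
```
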
## Requirements

- Install the Python 3 environment:

```shell
conda create -n ccrk python=3.8.8
conda activate ccrk
pip3 install -r requirements.txt
```

## Checkpoints

We pretrain the model for only 30 epochs on 2 A100 GPUs. The batch size is set to 128.

| Checkpoint | Pretrain Dataset |
|:----------:|:----------------:|
| [CCR-10-2M-30epoch](https://huggingface.co/BUAADreamer/CCRK/resolve/main/ccrk_2m_10lan_epoch_29.th) | `CC2M 10lan` |
| [CCR-10-3M-30epoch](https://huggingface.co/BUAADreamer/CCRK/resolve/main/ccrk_3m_10lan_epoch_29.th) | `CC2M+COCO+VG+SBU 10lan` |

**Notes**:

* `2M`: `CC2M`
* `3M`: `CC2M+SBU+VG+COCO`; the SBU/VG/COCO splits are borrowed from CCLM.
* `CC2M`: because of many broken links, we were only able to collect 1,863,804 images of the Conceptual Captions dataset.
* `10lan`: `zh,en,de,fr,ja,cs,id,es,ru,tr`

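The checkpoints above can also be fetched programmatically. A minimal sketch using `huggingface_hub` (a recent version with `local_dir` support is assumed; the repo id and filename are taken from the links in the table):

```python
# Sketch: download a pre-trained checkpoint from the Hugging Face Hub.
# Requires `pip install huggingface_hub`; adjust local_dir to wherever you keep checkpoints.
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="BUAADreamer/CCRK",
    filename="ccrk_3m_10lan_epoch_29.th",
    local_dir="pretrain_data",  # matches the layout shown in the Data section below
)
print(ckpt_path)
```
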
## Data

- Download the data from the corresponding websites.
- If running the pre-training scripts, download the pre-trained models used for parameter initialization:
  - image encoder: [swin-transformer-base](https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_base_patch4_window7_224_22k.pth)
  - text encoder: [xlm-roberta-large](https://huggingface.co/xlm-roberta-large)
- Organize these files like this (a layout sanity check follows the tree below):

```
CCRK/
    data/
        xlm-roberta-large/...
        swin_base_patch4_window7_224_22k.pth
        finetune/
            mscoco/...
            multi30k/...

    iglue/
        datasets/...

    images/
        flickr30k-images/*.jpg
        coco/
            train2014/*.jpg
            val2014/*.jpg
            test2015/*.jpg
        image_data_train/
            image_pixels/*.csv
        wit_test/
            *.csv

    pretrain_data/
        translated_4M/
            cc3m-mm-data-all/
                part_*.data
            vg-mm-data-all/
                part_*.data
            coco-mm-data-all/
                part_*.data
            sbu-mm-data-all/
                part_*.data
        ccrk_2m_10lan_epoch_29.th
        ccrk_3m_10lan_epoch_29.th
```

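As a quick sanity check before pre-training or fine-tuning, a small script like the following can catch missing files early. The path list mirrors the tree above and is only a sketch; trim it to the tasks and datasets you actually use.

```python
# Sketch: verify the expected data layout described above (run from the CCRK/ root).
from pathlib import Path

EXPECTED = [
    "data/xlm-roberta-large",
    "data/swin_base_patch4_window7_224_22k.pth",
    "data/finetune/mscoco",
    "data/finetune/multi30k",
    "iglue/datasets",
    "images/flickr30k-images",
    "images/coco/train2014",
    "images/coco/val2014",
    "images/coco/test2015",
    "images/image_data_train/image_pixels",
    "images/wit_test",
    "pretrain_data/translated_4M",
]

missing = [p for p in EXPECTED if not Path(p).exists()]
print("all paths found" if not missing else f"missing: {missing}")
```
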
## Pretrain

```shell
# CCRK 2M 6lan
python3 run.py --task "pretrain" --dist "1" --output_dir "output/CCRK-2m-6lan" --seed 42 --config configs/Pretrain_2m.yaml --pret_para "--language_chosen zh,ja,en,de,fr,cs" --device "c2"

# CCRK 2M 10lan
python3 run.py --task "pretrain" --dist "1" --output_dir "output/CCRK-2m-10lan" --seed 42 --config configs/Pretrain_2m.yaml --device "c2"

# CCRK 3M 6lan
python3 run.py --task "pretrain" --dist "1" --output_dir "output/CCRK-3m-6lan" --seed 42 --config configs/Pretrain_3m.yaml --pret_para "--language_chosen zh,ja,en,de,fr,cs" --device "c2"

# CCRK 3M 10lan
python3 run.py --task "pretrain" --dist "1" --output_dir "output/CCRK-3m-10lan" --seed 42 --config configs/Pretrain_3m.yaml --device "c2"
```

For distributed training across nodes, see `run.py` for more details.

### Data

To facilitate research on multi-lingual multi-modal pre-training, we provide the text translations of [`COCO+VG+SBU+CC3M`](https://drive.google.com/drive/folders/1lkRMFKSdz9bXhpB0n8eELF0ztbVmcBp6?usp=share_link), which cover 10 languages: `zh/en/de/fr/ja/cs/id/tr/ru/es`.

If you want to translate more languages, please refer to `translation/README.md`.

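For reference, here is a minimal sketch of how additional languages could be translated with EasyNMT and `m2m_100_1.2B` (the tools credited in the Acknowledgement). It is an illustration under those assumptions, not the exact pipeline in `translation/`.

```python
# Sketch: translate English captions into another language with EasyNMT.
# Requires `pip install easynmt`; see translation/README.md for the actual scripts.
from easynmt import EasyNMT

model = EasyNMT("m2m_100_1.2B")  # the model used for id/es/ru/tr in this repo

captions_en = [
    "A woman wearing a net on her head cutting a cake.",
    "A man riding on the back of a motorcycle.",
]
captions_id = model.translate(captions_en, source_lang="en", target_lang="id")
print(captions_id)
```
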
**Please cite the corresponding papers appropriately and download the images from their websites.**

For more details on the required data format, please read `dataset/pretrain_dataset_multilingual.py` (specifically the `ImageMultiTextDataset` class).

## Finetune

We fine-tune the model on 4 V100 GPUs for every dataset.

### Data: MSCOCO and Multi30K

Please download MSCOCO and Multi30K from the corresponding websites. We provide some links for reference.

- MSCOCO
  - ja: https://github.com/yahoojapan/YJCaptions
  - en: https://cs.stanford.edu/people/karpathy/deepimagesent/caption_datasets.zip
  - zh: https://github.com/li-xirong/coco-cn
- Multi30K
  - [https://github.com/multi30k/dataset](https://github.com/multi30k/dataset)

For these two datasets, you additionally need to reformulate the train JSON files like this (a conversion sketch follows the examples below):

```json
[
    {
        "caption": "A woman wearing a net on her head cutting a cake. ",
        "image": "coco/val2014/COCO_val2014_000000522418.jpg",
        "image_id": 522418
    }, ...
]
```

and the valid and test files like this:

```json
[
    {
        "image": "coco/val2014/COCO_val2014_000000391895.jpg",
        "caption": [
            "A man with a red helmet on a small moped on a dirt road. ",
            "Man riding a motor bike on a dirt road on the countryside.",
            "A man riding on the back of a motorcycle.",
            "A dirt path with a young person on a motor bike rests to the foreground of a verdant area with a bridge and a background of cloud-wreathed mountains. ",
            "A man in a red shirt and a red hat is on a motorcycle on a hill side."
        ],
        "image_id": 391895
    }, ...
]
```

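As a starting point, the sketch below converts the English Karpathy split (`dataset_coco.json` from the caption archive linked above) into the two formats shown. The input schema and output filenames are assumptions; adapt them to the annotation files you actually downloaded (e.g., the ja/zh captions use different sources).

```python
# Sketch: convert the Karpathy COCO split into the train/val/test JSON formats above.
import json

with open("dataset_coco.json") as f:
    karpathy = json.load(f)

train, val, test = [], [], []
for img in karpathy["images"]:
    path = f"coco/{img['filepath']}/{img['filename']}"
    captions = [s["raw"] for s in img["sentences"]]
    if img["split"] in ("train", "restval"):
        # train format: one entry per caption
        train.extend(
            {"caption": c, "image": path, "image_id": img["cocoid"]} for c in captions
        )
    else:
        # val/test format: one entry per image with all captions
        entry = {"image": path, "caption": captions, "image_id": img["cocoid"]}
        (val if img["split"] == "val" else test).append(entry)

for name, data in [("coco_train.json", train), ("coco_val.json", val), ("coco_test.json", test)]:
    with open(name, "w") as f:
        json.dump(data, f, ensure_ascii=False)
```
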
------

### Data: IGLUE

For IGLUE, you just need to clone [this repo](https://github.com/e-bug/iglue) and place it in the root path of our repo as follows. Our code works on the original IGLUE annotations without any preprocessing.

193
+ ```
194
+ CCRK/
195
+ iglue/
196
+ datasets/...
197
+ ```
198
+
199
+ For WIT, please download the `image_data_train.tar` and test images from its [kaggle](https://www.kaggle.com/c/wikipedia-image-caption/data) webpage, and extract them to `images` , `images/wit_test` seperately.
200
+
201
+ Tips for WIT:
202
+
203
+ - The download link of `image_data_train.tar` is in **Data Description**.
204
+ - You need to extract the files again in `images/image_data_train/image_pixels` and `iglue/datasets/wit/annotations/train_en.jsonl.zip`)
205
+
------

### Retrieval Tasks: Multi30K and MSCOCO

```shell
# English-only Fine-tune
## Multi30K
python3 run.py --dist 1 --task itr_multi30k --config configs/cclm-base-ft/Retrieval_multi30k_en_ft.yaml --output_dir output/multi30k --bs 64 --seed 42 --epoch 10 --checkpoint ../pretrain_data/ccrk_3m_10lan_epoch_29.th --device "c4"

## MSCOCO
python3 run.py --dist 1 --task itr_coco --config configs/cclm-base-ft/Retrieval_coco_en_ft.yaml --output_dir output/mscoco --bs 64 --seed 42 --epoch 10 --checkpoint ../pretrain_data/ccrk_3m_10lan_epoch_29.th --device "c4"
## split train and test to speed up
python3 run.py --dist 1 --task itr_coco --config configs/cclm-base-ft/Retrieval_coco_en_ft.yaml --output_dir output/mscoco --bs 64 --seed 42 --epoch 10 --checkpoint output/mscoco/checkpoint_best.pth --device "c4" --evaluate

# Single-Language Fine-tune
## Multi30K, optional language: cs/de/fr
python3 run.py --dist 1 --task itr_multi30k --config configs/cclm-base-ft/Retrieval_multi30k_cs_ft.yaml --output_dir output/multi30k/cs --bs 64 --seed 42 --epoch 10 --checkpoint output/multi30k/checkpoint_best.pth --device "c4"

## MSCOCO, optional config: ja/zh
python3 run.py --dist 1 --task itr_coco --config configs/cclm-base-ft/Retrieval_coco_ja_ft.yaml --output_dir output/mscoco/zh --bs 64 --seed 42 --epoch 10 --checkpoint output/mscoco/checkpoint_best.pth --device "c4"
```

------

### IGLUE: Zero-Shot

We provide examples of fine-tuning on the English train set and evaluating on the test sets of the other languages.

```shell
# xFlickr&CO
python3 run.py --dist 1 --task xflickrco --output_dir output/xflickrco --checkpoint ../pretrain_data/ccrk_3m_10lan_epoch_29.th --bs 64 --seed 42 --device "c4"

# WIT
python3 run.py --dist 1 --task wit --output_dir output/wit --bs 80 --seed 42 --checkpoint ../pretrain_data/ccrk_3m_10lan_epoch_29.th --device "c4"
```

------

### IGLUE: Few-Shot

We also evaluate CCRK in the IGLUE max-shot setting. **Note** that you need to fine-tune the pretrained model on English first, then load that checkpoint for few-shot learning.

```shell
# xFlickr&CO, optional language: de/es/id/ja/ru/tr/zh
python3 run.py --dist 1 --task xflickrco --output_dir output/xflickrco/zh --checkpoint output/xflickrco/checkpoint_best.pth --bs 64 --seed 42 --fewshot de,100 --lr 1e-6 --device "c4"
```

The value after the language code in `--fewshot` is the number of few-shot samples for xFlickr&CO; we always use the maximum value.

### MRV

After the models for all languages have been fine-tuned, MRV can be evaluated with the following command:

```shell
python3 run.py --task xflickrco --dist "1" --output_dir output/xflickrco/test --bs 64 --seed 42 --device "c4" --checkpoint output/xflickrco/checkpoint_best.pth --evaluate --fewshot de,100 --ft_para " --model_cap ccrk_3m_10lan --checkpoint_fmt output/xflickrco/format/checkpoint_best.pth"
```

The specific implementation of MRV can be found in the `analysis_ranks` function of `xFlickrCO.py`. We calculate the Mean Rank Variance over the four models `en,de,ja,zh`.

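For intuition, the toy sketch below illustrates the MRV idea (rank variance across languages for the same instance, averaged over instances). The input arrays are made up; the real computation lives in `analysis_ranks`.

```python
# Toy sketch of Mean Rank Variance (MRV) across languages.
# ranks[l] holds the retrieval rank of the ground-truth item for each query
# instance, obtained from the model fine-tuned on language l.
import numpy as np

ranks = {
    "en": np.array([1, 2, 1, 5]),
    "de": np.array([1, 4, 2, 5]),
    "ja": np.array([3, 2, 1, 6]),
    "zh": np.array([1, 2, 2, 5]),
}

rank_matrix = np.stack(list(ranks.values()))   # (num_languages, num_instances)
per_instance_var = rank_matrix.var(axis=0)     # rank variance across languages
mrv = per_instance_var.mean()                  # average over instances
print(f"MRV = {mrv:.4f}")
```
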
## Citation

If this work is helpful, please kindly cite as:

```bibtex
@article{nie2024improving,
  title={Improving the Consistency in Cross-Lingual Cross-Modal Retrieval with 1-to-K Contrastive Learning},
  author={Nie, Zhijie and Zhang, Richong and Feng, Zhangchi and Huang, Hailang and Liu, Xudong},
  journal={arXiv preprint arXiv:2406.18254},
  year={2024}
}
```

## Acknowledgement

Our code is based on [CCLM](https://github.com/zengyan-97/CCLM).

For the pre-training datasets, the `zh,ja,de,fr,cs` texts of `cc3m` are translated by [UC2](https://github.com/zmykevin/UC2), while the `zh,ja,de,fr,cs` texts of `sbu/coco/vg` are translated by [CCLM](https://github.com/zengyan-97/CCLM). For the other languages (`id,es,ru,tr`), we translate all datasets from English with the `m2m_100_1.2B` model developed by [Meta AI](https://ai.facebook.com/research/), using [EasyNMT](https://github.com/UKPLab/EasyNMT) as the translation tool.

Thanks for their great work!