tattrongvu committed on
Commit 5868eec · verified · 1 Parent(s): 251867d

Update README.md

Files changed (1)
  1. README.md +0 -5
README.md CHANGED
@@ -37,11 +37,6 @@ Data is the same as the ColPali data described in the paper.
 
 ## Model Training
 
-### Dataset
-The dataset was extended from the original colpali train set with the gemini 1.5 flash generated QA on 35k images scraped from internet.
-
-*Note: Multilingual data is present in the pretraining corpus of the language model and most probably in the multimodal training.*
-
 ### Parameters
 We train models use low-rank adapters ([LoRA](https://arxiv.org/abs/2106.09685))
 with `alpha=128` and `r=128` on the transformer layers from the language model,
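As context for the LoRA hyperparameters kept in the diff (`alpha=128`, `r=128` on the language model's transformer layers), here is a minimal sketch assuming the Hugging Face `peft` library; the `target_modules` names, dropout value, and `bias` setting are illustrative assumptions, not taken from this repository.

```python
# Sketch of a LoRA adapter config matching the README's stated hyperparameters.
# Only r and lora_alpha come from the README; everything else is an assumption.
from peft import LoraConfig

lora_config = LoraConfig(
    r=128,             # rank of the low-rank adapter matrices (from the README)
    lora_alpha=128,    # scaling factor (from the README); effective scale alpha / r = 1.0
    lora_dropout=0.1,  # assumed value; not specified in the diff
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
    bias="none",
)

# The adapter would then be attached to a base model with
# peft.get_peft_model(base_model, lora_config), leaving only the
# low-rank matrices trainable.
```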