Update model description
README.md
CHANGED
@@ -33,7 +33,9 @@ It achieves the following results on the evaluation set:
 
 ## Model description
 
-
+LXMERT is a transformer model for learning vision-and-language cross-modality representations. It consists of three Transformer encoders: an object relationship encoder, a language encoder, and a cross-modality encoder. It is pretrained via a combination of masked language modeling, visual-language text alignment, ROI-feature regression, masked visual-attribute modeling, masked visual-object modeling, and visual question answering objectives. It achieves state-of-the-art results on VQA and GQA.
+
+Paper link: [LXMERT: Learning Cross-Modality Encoder Representations from Transformers](https://arxiv.org/pdf/1908.07490.pdf)
 
 ## Intended uses & limitations
 
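For context on how a checkpoint like this is typically consumed, below is a minimal usage sketch (not part of the diff above) using the standard LXMERT classes from the Hugging Face `transformers` library. The model id `unc-nlp/lxmert-base-uncased` and the random visual features are illustrative assumptions, not this repo's actual inputs: LXMERT expects pre-extracted ROI features from an object detector (e.g., Faster R-CNN), not raw images.

```python
# Minimal sketch: running LXMERT on a question plus placeholder visual features.
# The model id and random tensors below are assumptions for illustration only.
import torch
from transformers import LxmertTokenizer, LxmertModel

tokenizer = LxmertTokenizer.from_pretrained("unc-nlp/lxmert-base-uncased")
model = LxmertModel.from_pretrained("unc-nlp/lxmert-base-uncased")

# Language input: a question, as in the VQA/GQA objectives.
inputs = tokenizer("What is on the table?", return_tensors="pt")

# Vision input: 36 ROI features (2048-d) with box positions (4-d).
# In practice these come from an object detector, not torch.rand.
visual_feats = torch.rand(1, 36, 2048)  # placeholder detector features
visual_pos = torch.rand(1, 36, 4)       # placeholder box coordinates

outputs = model(
    input_ids=inputs.input_ids,
    attention_mask=inputs.attention_mask,
    visual_feats=visual_feats,
    visual_pos=visual_pos,
)
print(outputs.pooled_output.shape)  # cross-modality [CLS] representation
```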