ayushman72 committed: Update README.md

README.md (CHANGED)
@@ -1,3 +1,19 @@
+---
+language:
+- en
+metrics:
+- bleu
+- meteor
+base_model:
+- openai-community/gpt2
+library_name: transformers
+tags:
+- image captioning
+- vit
+- gpt
+- gpt2
+- torch
+---
 # Image Captioning using ViT and GPT2 architecture
 
 This is my attempt to make a transformer model which takes an image as input and provides a caption for it
@@ -47,4 +63,4 @@ As we can see these are not the most amazing predictions. The performance could
 
 Check the [full notebook](./imagecaptioning.ipynb) or [Kaggle](https://www.kaggle.com/code/ayushman72/imagecaptioning)
 
-Download the [weights](https://drive.google.com/file/d/1X51wAI7Bsnrhd2Pa4WUoHIXvvhIcRH7Y/view?usp=drive_link) of the model
+Download the [weights](https://drive.google.com/file/d/1X51wAI7Bsnrhd2Pa4WUoHIXvvhIcRH7Y/view?usp=drive_link) of the model
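The metadata block added above declares `library_name: transformers` and `base_model: openai-community/gpt2`. For readers curious how a ViT encoder and GPT-2 decoder are typically wired together with that library, here is a minimal sketch using `VisionEncoderDecoderModel`. This is not the author's code (that lives in the linked notebook): the encoder checkpoint name and the image path are placeholder assumptions, and a model combined this way is untrained until captioning weights (e.g., from the Drive link above) are loaded.

```python
# Sketch: combining a ViT encoder with a GPT-2 decoder via Hugging Face transformers.
# Assumptions: the encoder checkpoint and "example.jpg" are placeholders; the repo's
# trained weights come from the notebook / Drive link, not from this snippet.
from PIL import Image
from transformers import (
    GPT2TokenizerFast,
    ViTImageProcessor,
    VisionEncoderDecoderModel,
)

model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "google/vit-base-patch16-224-in21k",  # vision encoder (assumed checkpoint)
    "openai-community/gpt2",              # text decoder, per base_model above
)
processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
tokenizer = GPT2TokenizerFast.from_pretrained("openai-community/gpt2")

# GPT-2 ships without a pad token; reuse EOS so generation can stop/batch cleanly.
tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = tokenizer.pad_token_id
model.config.decoder_start_token_id = tokenizer.bos_token_id
model.config.eos_token_id = tokenizer.eos_token_id

# Caption a single image (placeholder path) with beam search.
image = Image.open("example.jpg").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values
output_ids = model.generate(pixel_values, max_new_tokens=32, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Here `generate` runs beam search over the GPT-2 decoder conditioned on the ViT patch embeddings; with freshly combined checkpoints the captions will be noise until the model is fine-tuned on caption data, as the notebook does.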