armaggheddon97 committed on
Commit 44a689a · verified · 1 Parent(s): 55e299e

Added model card, running steps, and finetune results

Files changed (1)
  1. README.md +134 -165
README.md CHANGED

language:
- en
base_model:
- openai/clip-vit-base-patch32
pipeline_tag: zero-shot-classification
---

# Model Card for clip-vit-base-patch32_lego-minifigures

## Model Details

This model is a finetuned version of the `openai/clip-vit-base-patch32` CLIP (Contrastive Language-Image Pretraining) model on the [`lego_minifigure_captions`](https://huggingface.co/datasets/armaggheddon97/lego_minifigure_captions) dataset, specialized for matching images of Lego minifigures with their corresponding textual descriptions.

> [!NOTE]
> If you are interested in the code used, refer to the finetuning script on my [GitHub](https://github.com/Armaggheddon/BricksFinder/blob/main/model_finetuning/src/minifig_finetune.py).
 
## Model Description

- **Developed by:** The base model was developed by OpenAI; the finetuned version was developed by me, [armaggheddon97](https://huggingface.co/armaggheddon97).
- **Model type:** The model is a CLIP (Contrastive Language-Image Pretraining) model.
- **Language:** The model expects English text as input.
- **License:** The model is licensed under the MIT license.
- **Finetuned from model clip-vit-base-patch32:** The model is a finetuned version of the `openai/clip-vit-base-patch32` model on the `lego_minifigure_captions` dataset. The model has been finetuned for 7 epochs on an 80-20 train-validation split of the dataset. For more details, take a look at the finetuning script on my [GitHub](https://github.com/Armaggheddon/BricksFinder/blob/main/model_finetuning/src/minifig_finetune.py); a minimal sketch of the training step is shown after this list.
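
For orientation only, here is a minimal sketch of the contrastive finetuning step using the built-in CLIP loss from 🤗 transformers. This is not the actual training script (that lives in the GitHub repository linked above); the `caption` column name, the split, the batch size, and the learning rate are assumptions.
```python
import torch
from torch.utils.data import DataLoader
from datasets import load_dataset
from transformers import CLIPModel, CLIPProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device)
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Assumed: a "train" split with "image" and "caption" columns, split 80-20.
dataset = load_dataset("armaggheddon97/lego_minifigure_captions", split="train")
split = dataset.train_test_split(test_size=0.2, seed=42)

def collate(batch):
    # Tokenize the captions and preprocess the images into a single batch.
    return processor(
        text=[sample["caption"] for sample in batch],
        images=[sample["image"] for sample in batch],
        return_tensors="pt",
        padding=True,
    )

train_loader = DataLoader(split["train"], batch_size=32, shuffle=True, collate_fn=collate)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-6)

model.train()
for epoch in range(7):
    for batch in train_loader:
        batch = {k: v.to(device) for k, v in batch.items()}
        loss = model(**batch, return_loss=True).loss  # built-in CLIP contrastive loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```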
 
## Usage with 🤗 transformers
- Load the model and processor using the following code snippet:
```python
import torch
from transformers import CLIPProcessor, CLIPModel

device = "cuda" if torch.cuda.is_available() else "cpu"

model = CLIPModel.from_pretrained("armaggheddon97/clip-vit-base-patch32_lego-minifigures").to(device)
processor = CLIPProcessor.from_pretrained("armaggheddon97/clip-vit-base-patch32_lego-minifigures")
```
- Using `Auto` classes:
```python
from transformers import AutoModelForZeroShotImageClassification, AutoProcessor

model = AutoModelForZeroShotImageClassification.from_pretrained("armaggheddon97/clip-vit-base-patch32_lego-minifigures")
processor = AutoProcessor.from_pretrained("armaggheddon97/clip-vit-base-patch32_lego-minifigures")
```
- Using `pipeline` (a sample call is sketched after this list):
```python
from transformers import pipeline

model = "armaggheddon97/clip-vit-base-patch32_lego-minifigures"
clip_classifier = pipeline("zero-shot-image-classification", model=model)
```
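
As a follow-up to the `pipeline` option, a minimal call could look like the following; the image path and candidate labels are placeholders:
```python
# Classify a local image against a few candidate captions (placeholder values).
predictions = clip_classifier(
    "path_to_image.jpg",
    candidate_labels=[
        "a photo of a lego minifigure with a red cap",
        "a photo of a lego minifigure with green pants",
    ],
)
print(predictions)  # list of {"score": ..., "label": ...} entries
```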

## Load in float16 precision

The provided model weights are in float32 precision. To load the model in float16 precision and speed up inference, you can use the following code snippet:
```python
import torch
from transformers import CLIPProcessor, CLIPModel

model = CLIPModel.from_pretrained("armaggheddon97/clip-vit-base-patch32_lego-minifigures", dtype=torch.float16)
processor = CLIPProcessor.from_pretrained("armaggheddon97/clip-vit-base-patch32_lego-minifigures")
```

or alternatively using `torch` directly with:
```python
import torch
from transformers import CLIPModel

model = CLIPModel.from_pretrained("armaggheddon97/clip-vit-base-patch32_lego-minifigures")
model_fp16 = model.to(torch.float16)
```

## Use cases

### Generating embeddings
- To embed only the text:
```python
import torch
from transformers import CLIPTokenizerFast, CLIPModel

device = "cuda" if torch.cuda.is_available() else "cpu"

model = CLIPModel.from_pretrained("armaggheddon97/clip-vit-base-patch32_lego-minifigures").to(device)
tokenizer = CLIPTokenizerFast.from_pretrained("armaggheddon97/clip-vit-base-patch32_lego-minifigures")

text = ["a photo of a lego minifigure"]
tokens = tokenizer(text, return_tensors="pt", padding=True).to(device)
outputs = model.get_text_features(**tokens)
```
- To embed only the image (a sketch that compares the two embeddings follows this list):
```python
import torch
from PIL import Image
from transformers import CLIPProcessor, CLIPModel

device = "cuda" if torch.cuda.is_available() else "cpu"

model = CLIPModel.from_pretrained("armaggheddon97/clip-vit-base-patch32_lego-minifigures").to(device)
processor = CLIPProcessor.from_pretrained("armaggheddon97/clip-vit-base-patch32_lego-minifigures")

image = Image.open("path_to_image.jpg")
inputs = processor(images=image, return_tensors="pt").to(device)
outputs = model.get_image_features(**inputs)
```
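
Assuming the two snippets above were run in the same session (so `model`, `tokens`, and `inputs` are all defined), the two embeddings can be compared with cosine similarity; this is only a sketch of one possible way to match an image against a caption:
```python
# Recompute the features with distinct names, then L2-normalize and compare.
text_features = model.get_text_features(**tokens)
image_features = model.get_image_features(**inputs)

text_features = text_features / text_features.norm(dim=-1, keepdim=True)
image_features = image_features / image_features.norm(dim=-1, keepdim=True)

similarity = (image_features @ text_features.T).squeeze()  # higher means a better match
print(similarity.item())
```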

### Zero-shot image classification
```python
import torch
from PIL import Image
from transformers import CLIPProcessor, CLIPModel
from datasets import load_dataset

device = "cuda" if torch.cuda.is_available() else "cpu"

model = CLIPModel.from_pretrained("armaggheddon97/clip-vit-base-patch32_lego-minifigures").to(device)
processor = CLIPProcessor.from_pretrained("armaggheddon97/clip-vit-base-patch32_lego-minifigures")

dataset = load_dataset("armaggheddon97/lego_minifigure_captions", split="test")

captions = [
    "a photo of a lego minifigure with a t-shirt with a pen holder",
    "a photo of a lego minifigure with green pants",
    "a photo of a lego minifigure with a red cap",
]
image = dataset[0]["image"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True).to(device)
outputs = model(**inputs)

logits_per_image = outputs.logits_per_image
probabilities = logits_per_image.softmax(dim=1)
max_prob_idx = torch.argmax(logits_per_image, dim=1)
```
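
To inspect the prediction, the probabilities can be printed per caption; this is a small optional addition, e.g. to reproduce the percentages reported in the Results section below:
```python
# Print each caption with its probability and the best matching caption.
for caption, prob in zip(captions, probabilities[0].tolist()):
    print(f"{prob:.2%}  {caption}")
print("best match:", captions[max_prob_idx.item()])
```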

## Results
The goal was to obtain a model that can more accurately distinguish minifigure images based on their textual description. In this regard, in terms of accuracy, both models perform similarly. However, when testing on a classification task with the code in the [Zero-shot image classification](#zero-shot-image-classification) section, the finetuned model classifies the images with a much greater level of confidence. For example, when testing the model with the following inputs:
- `a photo of a lego minifigure with a t-shirt with a pen holder`
- `a photo of a lego minifigure with green pants`
- `a photo of a lego minifigure with a red cap`

and using the following image as input:

![image](./images/zero_shot_sample_image.png)

The finetuned model outputs:
- **99.76%**: "a photo of a lego minifigure with a t-shirt with a pen holder"
- **0.10%**: "a photo of a lego minifigure with green pants"
- **0.13%**: "a photo of a lego minifigure with a red cap"

while the base model, for the same inputs, gives:
- **44.14%**: "a photo of a lego minifigure with a t-shirt with a pen holder"
- **24.36%**: "a photo of a lego minifigure with green pants"
- **31.50%**: "a photo of a lego minifigure with a red cap"

This shows that the finetuned model classifies the images based on their textual description with far greater confidence.

Running the same task across the whole dataset, with 1 correct caption (always the first) and 2 randomly sampled ones, gives the following results:

![results](./images/model_caption_compare.png)

The plot visualizes the **normalized text logits** produced by the finetuned and base models (a sketch of the evaluation loop follows the list):
- **Input:** For each sample, an image of a Lego minifigure was taken, along with three captions:
    - The **correct caption** that matches the image (in position 0).
    - Two **randomly sampled, incorrect captions** (in positions 1 and 2).
- **Output:** The model generated text logits for each of the captions, reflecting the similarity between the image embedding and each caption embedding. These logits were then normalized for easier visualization.
- **Heatmap Visualization:** The normalized logits are displayed as a heatmap where:
    - Each **row** represents a different input sample.
    - Each **column** represents one of the three captions for that sample: the correct one (position 0) and the two randomly sampled ones (positions 1 and 2).
    - The **color intensity** represents the normalized logit score assigned to each caption by the model, with darker colors indicating higher scores and thus higher confidence (i.e. the larger the contrast between the correct caption and the other two, the better the result).
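
A minimal sketch of how this per-sample evaluation could be implemented is shown below; it is illustrative rather than the exact script used, and the `caption` column name, the `test` split, and the normalization choice are assumptions:
```python
import random

import torch
from datasets import load_dataset
from transformers import CLIPProcessor, CLIPModel

device = "cuda" if torch.cuda.is_available() else "cpu"
model = CLIPModel.from_pretrained("armaggheddon97/clip-vit-base-patch32_lego-minifigures").to(device)
processor = CLIPProcessor.from_pretrained("armaggheddon97/clip-vit-base-patch32_lego-minifigures")

dataset = load_dataset("armaggheddon97/lego_minifigure_captions", split="test")

correct = 0
all_logits = []
for idx, sample in enumerate(dataset):
    # Caption 0 is the correct one; captions 1 and 2 come from other images.
    other = random.sample([i for i in range(len(dataset)) if i != idx], 2)
    captions = [sample["caption"]] + [dataset[i]["caption"] for i in other]

    inputs = processor(text=captions, images=sample["image"], return_tensors="pt", padding=True).to(device)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image[0]

    all_logits.append((logits / logits.norm()).cpu())  # one possible normalization for the heatmap
    correct += int(logits.argmax().item() == 0)

print(f"accuracy: {correct / len(dataset):.2%}")
```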

The **base model** (right), as expected, did not show high confidence in any of the classes, showing poor discrimination capability between the image and text samples, which is also highlighted by the much smaller variation between the scores for the labels. However, in terms of accuracy, it is still able to assign the correct caption for 99.98% of the samples.

The **finetuned model** (left) shows much higher confidence in the correct caption, with a clear distinction between the correct and incorrect captions. This is reflected in the higher scores assigned to the correct caption and the lower ones assigned to the incorrect captions. In terms of accuracy, the finetuned model performs similarly, though slightly below the base model, with an accuracy of 99.39%.