espejelomar committed 99a5c20 (parent: d2f55a9)

Update README.md

Files changed (1): README.md (+162, -3)

---
tags:
- image-classification
- fastai
license: apache-2.0
---

# Pet breeds classification model

Model fine-tuned on the Oxford-IIIT Pet Dataset, which was introduced in
[this paper](https://www.robots.ox.ac.uk/~vgg/publications/2012/parkhi12a/) and first released on
[this webpage](https://www.robots.ox.ac.uk/~vgg/data/pets/).

The pretrained backbone was trained on the ImageNet dataset, specifically the 1,000-class ILSVRC subset of over 1.2 million images. ImageNet was introduced in [this paper](https://image-net.org/static_files/papers/imagenet_cvpr09.pdf) and is available on [this webpage](https://image-net.org/download.php).

Disclaimer: the model was fine-tuned following [Chapter 5](https://github.com/fastai/fastbook/blob/master/05_pet_breeds.ipynb) of [Deep Learning for Coders with Fastai and Pytorch: AI Applications Without a PhD (2020)](https://github.com/fastai/fastbook) by Jeremy Howard and Sylvain Gugger.

## Model description

The model was fine-tuned with the fastai library's `cnn_learner` method, using a ResNet-34 backbone pretrained on the ImageNet dataset. fastai uses PyTorch for the underlying operations. `cnn_learner` automatically builds a pretrained model from a given architecture, with a custom head suitable for the target data.

ResNet-34 is a 34-layer convolutional neural network with residual (skip) connections: each block's input is added to its output before being passed on to subsequent layers.

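The residual connection can be sketched in a toy, framework-free form. This illustrates the idea only, not the actual ResNet implementation; `block` stands in for a stack of convolutional layers:

```python
# Toy sketch of a residual (skip) connection: the block's input is added
# to its transformed output, giving gradients an identity path to flow through.
def residual_block(block, x):
    return block(x) + x

# With a stand-in "layer" that doubles its input:
out = residual_block(lambda v: 2 * v, 3)
print(out)  # 2*3 + 3 = 9
```
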
Specifically, the model was obtained with:

```python
# dls is a fastai DataLoaders built from the Oxford-IIIT Pet Dataset
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(2)  # 1 epoch with the body frozen, then 2 epochs unfrozen
```

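The `error_rate` metric passed to `cnn_learner` is simply one minus accuracy. A minimal, framework-free sketch of that definition:

```python
# error_rate = 1 - accuracy, computed here from raw prediction counts.
def error_rate(n_correct, n_total):
    return 1 - n_correct / n_total

print(error_rate(3, 4))  # 0.25 (i.e., 75% accuracy)
```
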
## Intended uses & limitations

You can fine-tune this model further on related animal-classification tasks; note, however, that it is primarily intended to illustrate the ease of integrating fastai-trained models into the Hugging Face Hub. For pretrained image classification models, see the [Hugging Face Hub](https://huggingface.co/models?pipeline_tag=image-classification&sort=downloads) and select Image Classification from the task menu.

### How to use

This model is a fastai `Learner`, so inference goes through the fastai API rather than a transformers pipeline. A minimal sketch, assuming the learner was exported with `learn.export()` to a file named `model.pkl` and that a pet image is available locally (both filenames are illustrative):

```python
from fastai.vision.all import *

# Load the exported learner; 'model.pkl' is an assumed filename.
learn = load_learner('model.pkl')

# Classify a pet image; the path is a placeholder.
pred_class, pred_idx, probs = learn.predict(PILImage.create('my_pet.jpg'))
print(pred_class)
```

`learn.predict` returns the decoded label, the label's index in the vocabulary, and the per-class probabilities.

## Training data

The ResNet-34 backbone was pretrained on [ImageNet](https://image-net.org/static_files/papers/imagenet_cvpr09.pdf) (the 1,000-class ILSVRC subset of over 1.2 million images) and fine-tuned on [The Oxford-IIIT Pet Dataset](https://www.robots.ox.ac.uk/~vgg/data/pets/).

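In the Oxford-IIIT Pet Dataset, each filename encodes the breed (e.g. `great_pyrenees_173.jpg`), and Chapter 5 of the book extracts labels from filenames with a regular expression. A framework-free sketch of that labelling step (the filenames are illustrative):

```python
import re

# Extract the breed label from a pet image filename such as
# 'great_pyrenees_173.jpg' -> 'great_pyrenees'.
def breed_from_filename(name):
    match = re.match(r'(.+)_\d+\.jpg$', name)
    return match.group(1) if match else None

print(breed_from_filename('great_pyrenees_173.jpg'))  # great_pyrenees
```
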
## Preprocessing

For more detailed information on the preprocessing procedure, refer to [Chapter 5](https://github.com/fastai/fastbook/blob/master/05_pet_breeds.ipynb) of [Deep Learning for Coders with Fastai and Pytorch: AI Applications Without a PhD (2020)](https://github.com/fastai/fastbook).

Two main strategies are followed for presizing the images:

- Resize images to relatively "large" dimensions, that is, dimensions significantly larger than the target training dimensions.
- Compose all of the common augmentation operations (including a resize to the final target size) into one, and perform the combined operation on the GPU only once at the end of processing, rather than performing the operations individually and interpolating multiple times.

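The initial crop described below reduces to simple arithmetic: it is a square whose side covers the full smaller dimension of the image. A toy sketch of that rule (not the actual fastai implementation):

```python
# The presize crop is a square covering the entire smaller image dimension;
# the single augmented resize to the training size happens later, on the GPU.
def presize_crop_side(width, height):
    return min(width, height)

print(presize_crop_side(640, 480))  # 480
```
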
"The first step, the resize, creates images large enough that they have spare margin to allow further augmentation transforms on their inner regions without creating empty zones. This transformation works by resizing to a square, using a large crop size. On the training set, the crop area is chosen randomly, and the size of the crop is selected to cover the entire width or height of the image, whichever is smaller.

In the second step, the GPU is used for all data augmentation, and all of the potentially destructive operations are done together, with a single interpolation at the end." ([Howard and Gugger, 2020](https://github.com/fastai/fastbook))

Specifically, the following code from the book compares fastai's presizing strategy with the traditional approach:

```python
from fastai.vision.all import *  # fastbook-style import

# A comparison of fastai's data augmentation strategy (left) and the
# traditional approach (right).
dblock1 = DataBlock(blocks=(ImageBlock(), CategoryBlock()),
                    get_y=parent_label,
                    item_tfms=Resize(460))
# Place an image at 'images/grizzly.jpg' relative to the working directory
# before running this.
dls1 = dblock1.dataloaders([(Path.cwd()/'images'/'grizzly.jpg')]*100, bs=8)
dls1.train.get_idxs = lambda: Inf.ones
x,y = dls1.valid.one_batch()
_,axs = subplots(1, 2)

# fastai-style presizing: one combined affine operation at the final size.
x1 = TensorImage(x.clone())
x1 = x1.affine_coord(sz=224)
x1 = x1.rotate(draw=30, p=1.)
x1 = x1.zoom(draw=1.2, p=1.)
x1 = x1.warp(draw_x=-0.2, draw_y=0.2, p=1.)

# Traditional approach: each transform applied (and interpolated) separately.
tfms = setup_aug_tfms([Rotate(draw=30, p=1, size=224), Zoom(draw=1.2, p=1., size=224),
                       Warp(draw_x=-0.2, draw_y=0.2, p=1., size=224)])
x = Pipeline(tfms)(x)
TensorImage(x[0]).show(ctx=axs[0])
TensorImage(x1[0]).show(ctx=axs[1]);
```

### BibTeX entry and citation info

```bibtex
@book{howard2020deep,
  author    = {Howard, J. and Gugger, S.},
  title     = {Deep Learning for Coders with Fastai and Pytorch: AI Applications Without a PhD},
  isbn      = {9781492045526},
  year      = {2020},
  url       = {https://books.google.no/books?id=xd6LxgEACAAJ},
  publisher = {O'Reilly Media, Incorporated},
}
```