cxfajar197 committed
Commit 793f995 · verified · 1 Parent(s): 8671263

Update README.md

Files changed (1)
  1. README.md +17 -24
README.md CHANGED
@@ -22,11 +22,8 @@ This is an Urdu OCR model designed for handwriting recognition tasks. It utilize
  This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

  - **Developed by:** Fajar Pervaiz
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
  - **Model type:** VisionEncoderDecoderModel
  - **Language(s) (NLP):** Urdu (ur)
- - **License:** [More Information Needed]
  - **Finetuned from model [optional]:** facebook/deit-base-distilled-patch16-384, bert-base-multilingual-cased

  ### Model Sources [optional]
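
The bullets above name a DeiT image encoder (`facebook/deit-base-distilled-patch16-384`) and a multilingual BERT decoder (`bert-base-multilingual-cased`). For reference, a minimal sketch of how such a pairing is typically assembled in 🤗 Transformers — the commit does not include this construction code, so treat it as illustrative:

```python
from transformers import AutoTokenizer, VisionEncoderDecoderModel

# Standard recipe for pairing a vision encoder with a text decoder; the
# checkpoint names come from the model card, the rest is an assumption.
model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "facebook/deit-base-distilled-patch16-384",
    "bert-base-multilingual-cased",
)

# Generation needs the decoder start and padding token ids set explicitly.
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
```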
@@ -46,28 +43,25 @@ This is the model card of a 🤗 transformers model that has been pushed on the
  <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
  This model can be directly used for Urdu handwriting recognition tasks, particularly for extracting text from scanned documents or handwritten notes.

- [More Information Needed]

  ### Downstream Use [optional]

  <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
  This model can be fine-tuned further for specific handwriting datasets or integrated into larger OCR systems for Urdu or multilingual text recognition.

- [More Information Needed]
+

  ### Out-of-Scope Use

  <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
  The model is not suitable for languages other than Urdu or domains with highly noisy or distorted images without further fine-tuning.

- [More Information Needed]
-
  ## Bias, Risks, and Limitations

  <!-- This section is meant to convey both technical and sociotechnical limitations. -->
  The model may exhibit biases inherent in the training data. Misrecognition of complex or ambiguous handwriting is possible. Users should carefully evaluate its performance in their specific use case.

- [More Information Needed]
+

  ### Recommendations

@@ -83,7 +77,7 @@ processor = TrOCRProcessor.from_pretrained("path/to/processor")
  model = VisionEncoderDecoderModel.from_pretrained("path/to/model")


- [More Information Needed]
+

  ## Training Details

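The get-started snippet in the hunk above loads the processor and model but stops before inference. A minimal end-to-end sketch, assuming the placeholder paths are replaced with real checkpoints (`urdu_line.png` is a hypothetical input image):

```python
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

# Placeholder paths from the card; point these at the actual checkpoints.
processor = TrOCRProcessor.from_pretrained("path/to/processor")
model = VisionEncoderDecoderModel.from_pretrained("path/to/model")

# Recognize one handwritten line; the file name is a made-up example.
image = Image.open("urdu_line.png").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(text)
```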
@@ -93,7 +87,7 @@ model = VisionEncoderDecoderModel.from_pretrained("path/to/model")
  <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
  The training data comprises 46,742 image-text pairs from a custom dataset of Urdu handwritten texts.

- [More Information Needed]
+

  ### Training Procedure

@@ -102,12 +96,12 @@ Images were resized to 384x384 pixels and normalized. Augmentations such as Elas

  #### Preprocessing [optional]

- [More Information Needed]
+


  #### Training Hyperparameters

- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+ - **Training regime:** <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
  - Training regime: Mixed precision (fp16)
  - Learning rate: 4e-5
  - Batch size: 8
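
Expressed as 🤗 Transformers training arguments, the hyperparameters listed above would look roughly like the sketch below; `output_dir` and `predict_with_generate` are illustrative assumptions, not values from the card:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="trocr-urdu",        # assumed name, not from the card
    learning_rate=4e-5,             # "Learning rate: 4e-5"
    per_device_train_batch_size=8,  # "Batch size: 8"
    fp16=True,                      # "Mixed precision (fp16)"
    predict_with_generate=True,     # assumed, for generation-based eval
)
```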
@@ -119,7 +113,6 @@ Images were resized to 384x384 pixels and normalized. Augmentations such as Elas

  <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

- [More Information Needed]

  ## Evaluation

@@ -132,7 +125,7 @@ Images were resized to 384x384 pixels and normalized. Augmentations such as Elas
  <!-- This should link to a Dataset Card if possible. -->
  A subset of 4,675 image-text pairs was used for evaluation.

- [More Information Needed]
+

  #### Factors

@@ -141,17 +134,17 @@ The model was tested on handwritten text images with varying font styles and com



- [More Information Needed]
+

  #### Metrics

  <!-- These are the evaluation metrics being used, ideally with a description of why. -->

- [More Information Needed]
+

  ### Results

- [More Information Needed]
+

  #### Summary

@@ -161,7 +154,7 @@ The model was tested on handwritten text images with varying font styles and com

  <!-- Relevant interpretability work for the model goes here -->

- [More Information Needed]
+

  ## Environmental Impact

@@ -184,7 +177,7 @@ The model uses a VisionEncoderDecoder architecture combining a ViT encoder and a

  ### Compute Infrastructure

- [More Information Needed]
+

  #### Hardware

@@ -193,7 +186,7 @@ NVIDIA GPU (e.g., A100)

  #### Software

- [More Information Needed]
+
  Python, PyTorch, Hugging Face Transformers


@@ -208,7 +201,7 @@ Python, PyTorch, Hugging Face Transformers

  **APA:**

- [More Information Needed]
+

  ## Glossary [optional]

@@ -217,7 +210,7 @@ CER: Character Error Rate
  WER: Word Error Rate
  OCR: Optical Character Recognition

- [More Information Needed]
+

  ## More Information [optional]

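The glossary defines CER and WER, though the Metrics section itself reports no values. A short sketch of how these are commonly computed with the 🤗 `evaluate` library (the prediction/reference strings are made-up examples, not model outputs):

```python
import evaluate

cer = evaluate.load("cer")  # character error rate
wer = evaluate.load("wer")  # word error rate

# Hypothetical prediction/reference pair for illustration only.
predictions = ["یہ ایک مثال ہے"]
references = ["یہ ایک مثال ہے"]

print("CER:", cer.compute(predictions=predictions, references=references))
print("WER:", wer.compute(predictions=predictions, references=references))
```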
@@ -225,10 +218,10 @@ OCR: Optical Character Recognition

  ## Model Card Authors [optional]

- [More Information Needed]
+
  Fajar Pervaiz

  ## Model Card Contact

- [More Information Needed]
+
