cxfajar197 committed
Commit 8671263 · verified · 1 Parent(s): 6edaec0

Update README.md

Files changed (1)
  1. README.md +46 -11
README.md CHANGED
@@ -1,6 +1,7 @@
  ---
  library_name: transformers
- tags: []
  ---

  # Model Card for Model ID
@@ -8,6 +9,9 @@ tags: []
  <!-- Provide a quick summary of what the model is/does. -->


  ## Model Details
@@ -17,13 +21,13 @@ tags: []

  This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- - **Developed by:** [More Information Needed]
  - **Funded by [optional]:** [More Information Needed]
  - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
  - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]

  ### Model Sources [optional]
@@ -40,24 +44,28 @@ This is the model card of a 🤗 transformers model that has been pushed on the
  ### Direct Use

  <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

  [More Information Needed]

  ### Downstream Use [optional]

  <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

  [More Information Needed]

  ### Out-of-Scope Use

  <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

  [More Information Needed]

  ## Bias, Risks, and Limitations

  <!-- This section is meant to convey both technical and sociotechnical limitations. -->

  [More Information Needed]
@@ -65,25 +73,32 @@ This is the model card of a 🤗 transformers model that has been pushed on the

  <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
  ## How to Get Started with the Model

  Use the code below to get started with the model.

  [More Information Needed]

  ## Training Details

  ### Training Data

  <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

  [More Information Needed]

  ### Training Procedure

  <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

  #### Preprocessing [optional]
@@ -93,6 +108,12 @@ Use the code below to get started with the model.
  #### Training Hyperparameters

  - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

  #### Speeds, Sizes, Times [optional]
@@ -109,12 +130,16 @@ Use the code below to get started with the model.
  #### Testing Data

  <!-- This should link to a Dataset Card if possible. -->

  [More Information Needed]

  #### Factors

  <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

  [More Information Needed]
@@ -144,7 +169,7 @@ Use the code below to get started with the model.

  Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- - **Hardware Type:** [More Information Needed]
  - **Hours used:** [More Information Needed]
  - **Cloud Provider:** [More Information Needed]
  - **Compute Region:** [More Information Needed]
@@ -154,7 +179,8 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]

  ### Model Architecture and Objective

- [More Information Needed]

  ### Compute Infrastructure
@@ -162,11 +188,15 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]

  #### Hardware

- [More Information Needed]

  #### Software

  [More Information Needed]

  ## Citation [optional]
@@ -183,6 +213,9 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]
  ## Glossary [optional]

  <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

  [More Information Needed]
@@ -193,7 +226,9 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]
  ## Model Card Authors [optional]

  [More Information Needed]

  ## Model Card Contact

- [More Information Needed]
  ---
  library_name: transformers
+ language:
+ - ur
  ---

  # Model Card for Model ID

  <!-- Provide a quick summary of what the model is/does. -->

+ This is an Urdu OCR model for handwriting recognition. It uses a VisionEncoderDecoderModel with a ViT-based encoder and a BERT-based decoder, fine-tuned on a custom dataset for robust and accurate text extraction from images.

  ## Model Details

  This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

+ - **Developed by:** Fajar Pervaiz
  - **Funded by [optional]:** [More Information Needed]
  - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** VisionEncoderDecoderModel
+ - **Language(s) (NLP):** Urdu (ur)
  - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** facebook/deit-base-distilled-patch16-384, bert-base-multilingual-cased
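
The base checkpoints listed under **Finetuned from model** correspond to the standard 🤗 Transformers encoder–decoder composition. As a rough sketch (assuming those two checkpoints, not the exact recipe used for this model), such a model can be initialized before fine-tuning like this:

```python
from transformers import (
    AutoImageProcessor,
    AutoTokenizer,
    TrOCRProcessor,
    VisionEncoderDecoderModel,
)

# Illustrative only: pair the ViT-style image encoder with the multilingual BERT decoder.
model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "facebook/deit-base-distilled-patch16-384",  # image encoder
    "bert-base-multilingual-cased",              # text decoder (vocabulary covers Urdu script)
)

# A matching processor can be assembled from the same checkpoints.
processor = TrOCRProcessor(
    AutoImageProcessor.from_pretrained("facebook/deit-base-distilled-patch16-384"),
    AutoTokenizer.from_pretrained("bert-base-multilingual-cased"),
)

# Tie generation to the tokenizer's special tokens before training.
model.config.decoder_start_token_id = processor.tokenizer.cls_token_id
model.config.pad_token_id = processor.tokenizer.pad_token_id
```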
 
  ### Model Sources [optional]

  ### Direct Use

  <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+ This model can be used directly for Urdu handwriting recognition, particularly for extracting text from scanned documents or handwritten notes.

  [More Information Needed]

  ### Downstream Use [optional]

  <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+ This model can be fine-tuned further on specific handwriting datasets or integrated into larger OCR systems for Urdu or multilingual text recognition.

  [More Information Needed]

  ### Out-of-Scope Use

  <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+ The model is not suitable for languages other than Urdu, or for domains with highly noisy or distorted images, without further fine-tuning.

  [More Information Needed]

  ## Bias, Risks, and Limitations

  <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+ The model may exhibit biases inherent in the training data and may misrecognize complex or ambiguous handwriting. Users should carefully evaluate its performance for their specific use case.

  [More Information Needed]

  <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

+ Users should test the model thoroughly on their own data and consider additional fine-tuning if required. Use in sensitive applications (e.g., legal or medical document OCR) should be avoided without rigorous evaluation.

  ## How to Get Started with the Model

  Use the code below to get started with the model.
+ from transformers import VisionEncoderDecoderModel, TrOCRProcessor
+ processor = TrOCRProcessor.from_pretrained("path/to/processor")
+ model = VisionEncoderDecoderModel.from_pretrained("path/to/model")
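
Building on the snippet above, a minimal inference sketch might look as follows; the image path and `max_length` are illustrative placeholders rather than values documented for this model.

```python
from PIL import Image
import torch

# Load a handwriting image (placeholder path) and prepare it for the encoder.
image = Image.open("urdu_sample.png").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values

# Generate token ids and decode them into Urdu text.
with torch.no_grad():
    generated_ids = model.generate(pixel_values, max_length=128)
predicted_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(predicted_text)
```
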
  [More Information Needed]

  ## Training Details

  ### Training Data

  <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+ The training data comprises 46,742 image-text pairs from a custom dataset of Urdu handwritten texts.

  [More Information Needed]

  ### Training Procedure

  <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+ Images were resized to 384x384 pixels and normalized. Augmentations such as Elastic Transform and Gaussian Blur were applied to enhance robustness.
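
To make the preprocessing above concrete, a torchvision pipeline along these lines would reproduce the described steps (torchvision ≥ 0.13 assumed for `ElasticTransform`); the augmentation probabilities, strengths, and normalization statistics are assumptions, not settings reported in this card.

```python
from torchvision import transforms

# Sketch of the described preprocessing: resize to 384x384, occasional elastic
# distortion and Gaussian blur, then tensor conversion and normalization.
# All numeric values below are illustrative guesses.
train_transforms = transforms.Compose([
    transforms.Resize((384, 384)),
    transforms.RandomApply([transforms.ElasticTransform(alpha=25.0)], p=0.3),
    transforms.RandomApply([transforms.GaussianBlur(kernel_size=3, sigma=(0.1, 1.0))], p=0.3),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])
```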
 
  #### Preprocessing [optional]

  #### Training Hyperparameters

  - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+ - Training regime: Mixed precision (fp16)
+ - Learning rate: 4e-5
+ - Batch size: 8
+ - Epochs: 12
+ - Optimizer: AdamW
+ - Scheduler: Linear decay
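
For orientation, the hyperparameters listed above map onto 🤗 Transformers `Seq2SeqTrainingArguments` roughly as shown below; settings not listed in the card (output directory, evaluation strategy, warmup) are assumptions.

```python
from transformers import Seq2SeqTrainingArguments

# Illustrative mapping of the reported hyperparameters; unreported values are guesses.
training_args = Seq2SeqTrainingArguments(
    output_dir="urdu-ocr-finetune",     # assumed
    per_device_train_batch_size=8,      # batch size 8
    learning_rate=4e-5,                 # learning rate 4e-5
    num_train_epochs=12,                # 12 epochs
    fp16=True,                          # mixed-precision (fp16) training
    optim="adamw_torch",                # AdamW optimizer
    lr_scheduler_type="linear",         # linear decay schedule
    predict_with_generate=True,         # decode predictions for metrics such as CER/WER
)
```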
 
  #### Speeds, Sizes, Times [optional]

  #### Testing Data

  <!-- This should link to a Dataset Card if possible. -->
+ A subset of 4,675 image-text pairs was used for evaluation.

  [More Information Needed]

  #### Factors

  <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+ The model was tested on handwritten text images with varying writing styles and complexity.

  [More Information Needed]

  Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

+ - **Hardware Type:** NVIDIA GPU
  - **Hours used:** [More Information Needed]
  - **Cloud Provider:** [More Information Needed]
  - **Compute Region:** [More Information Needed]

  ### Model Architecture and Objective

+ The model uses a VisionEncoderDecoder architecture combining a ViT encoder and a BERT decoder.
  ### Compute Infrastructure

  #### Hardware

+ NVIDIA GPU (e.g., A100)

  #### Software

  [More Information Needed]
+ Python, PyTorch, Hugging Face Transformers
  ## Citation [optional]

  ## Glossary [optional]

  <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+ CER: Character Error Rate
+ WER: Word Error Rate
+ OCR: Optical Character Recognition

  [More Information Needed]
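
CER and WER, defined in the glossary above, are the metrics usually reported for OCR models. A small sketch of how they can be computed with the 🤗 `evaluate` library (which wraps `jiwer`) is shown below, using made-up strings rather than samples from the evaluation set.

```python
import evaluate

# Character Error Rate and Word Error Rate over decoded predictions vs. references.
cer = evaluate.load("cer")
wer = evaluate.load("wer")

predictions = ["example decoded text"]   # placeholder model outputs
references = ["example reference text"]  # placeholder ground-truth transcriptions

print("CER:", cer.compute(predictions=predictions, references=references))
print("WER:", wer.compute(predictions=predictions, references=references))
```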
 
 
  ## Model Card Authors [optional]

  [More Information Needed]
+ Fajar Pervaiz

  ## Model Card Contact

+ [More Information Needed]