---
library_name: transformers
language:
- ur
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

This is an Urdu OCR model designed for handwriting recognition. It uses a VisionEncoderDecoderModel with a ViT-based encoder and a BERT-based decoder, fine-tuned on a custom dataset for robust and accurate text extraction from images.

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** Fajar Pervaiz
- **Model type:** VisionEncoderDecoderModel
- **Language(s) (NLP):** Urdu (ur)
- **Finetuned from model [optional]:** facebook/deit-base-distilled-patch16-384 (encoder), bert-base-multilingual-cased (decoder)

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

This model can be used directly for Urdu handwriting recognition, particularly for extracting text from scanned documents or handwritten notes.

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

This model can be fine-tuned further on specific handwriting datasets or integrated into larger OCR systems for Urdu or multilingual text recognition.
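
When fine-tuning, each example must be encoded as `pixel_values` (the image) and `labels` (token ids). The sketch below shows one way to wrap (image path, transcription) pairs in a PyTorch dataset; the class name, paths, and `max_length` are illustrative, not part of this repository.

```python
import torch
from PIL import Image
from torch.utils.data import Dataset

class UrduOcrDataset(Dataset):
    """Illustrative dataset wrapping (image_path, transcription) pairs."""

    def __init__(self, samples, processor, max_length=128):
        self.samples = samples        # e.g. [("img_0001.png", "<transcription>"), ...]
        self.processor = processor    # the TrOCRProcessor loaded as shown below
        self.max_length = max_length

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        path, text = self.samples[idx]
        image = Image.open(path).convert("RGB")
        pixel_values = self.processor(images=image, return_tensors="pt").pixel_values
        label_ids = self.processor.tokenizer(
            text, padding="max_length", truncation=True, max_length=self.max_length
        ).input_ids
        # Replace padding with -100 so those positions are ignored by the loss.
        label_ids = [
            i if i != self.processor.tokenizer.pad_token_id else -100
            for i in label_ids
        ]
        return {
            "pixel_values": pixel_values.squeeze(0),
            "labels": torch.tensor(label_ids),
        }
```

A `Seq2SeqTrainer` setup using such a dataset is sketched under Training Hyperparameters below.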

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

The model is not suitable for languages other than Urdu, or for domains with highly noisy or distorted images, without further fine-tuning.

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

The model may exhibit biases inherent in its training data, and it may misrecognize complex or ambiguous handwriting. Users should carefully evaluate its performance on their specific use case.

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users should test the model thoroughly on their own data and fine-tune it further if required. It should not be used in sensitive applications (e.g., legal or medical document OCR) without rigorous evaluation.

## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

# Load the processor and model (replace the paths with this repository's id
# or your local checkpoint directory).
processor = TrOCRProcessor.from_pretrained("path/to/processor")
model = VisionEncoderDecoderModel.from_pretrained("path/to/model")
```
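
For inference on a single image, a minimal sketch (the file name is a placeholder):

```python
from PIL import Image

# OCR a single handwriting image.
image = Image.open("urdu_sample.png").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values

generated_ids = model.generate(pixel_values, max_length=128)
text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(text)
```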

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

The training data comprises 46,742 image-text pairs from a custom dataset of Urdu handwritten texts.

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

Images were resized to 384×384 pixels and normalized. Augmentations such as elastic transform and Gaussian blur were applied to enhance robustness.
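
A preprocessing pipeline along these lines could be written with `torchvision`; the probabilities and magnitudes below are placeholders, not the exact values used in training.

```python
from torchvision import transforms

# Illustrative reconstruction of the preprocessing described above.
train_transform = transforms.Compose([
    transforms.Resize((384, 384)),  # match the 384x384 encoder input
    transforms.RandomApply([transforms.ElasticTransform(alpha=50.0)], p=0.3),
    transforms.RandomApply([transforms.GaussianBlur(kernel_size=3)], p=0.3),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])
```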

#### Training Hyperparameters

- **Training regime:** fp16 mixed precision
- **Learning rate:** 4e-5
- **Batch size:** 8
- **Epochs:** 12
- **Optimizer:** AdamW
- **LR scheduler:** linear decay
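
For reference, these hyperparameters map onto 🤗 `Seq2SeqTrainingArguments` roughly as follows. This is a sketch, not the exact training script; `output_dir` is a placeholder and `train_dataset` stands for the dataset sketched under Downstream Use.

```python
from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="urdu-ocr-checkpoints",  # placeholder
    per_device_train_batch_size=8,
    num_train_epochs=12,
    learning_rate=4e-5,
    lr_scheduler_type="linear",  # linear decay
    optim="adamw_torch",         # AdamW
    fp16=True,                   # mixed precision
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,                  # loaded as in "How to Get Started"
    args=training_args,
    train_dataset=train_dataset,  # e.g. an UrduOcrDataset instance
)
trainer.train()
```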

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

A subset of 4,675 image-text pairs was used for evaluation.

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

The model was tested on handwritten text images with varying writing styles and complexity.

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

[More Information Needed]

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** NVIDIA GPU
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

The model uses the VisionEncoderDecoder architecture, pairing a ViT-based image encoder (initialized from facebook/deit-base-distilled-patch16-384) with a BERT-based text decoder (initialized from bert-base-multilingual-cased), trained with a sequence-to-sequence text generation objective.
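
For illustration, the sketch below shows the standard 🤗 Transformers pattern for assembling such an architecture from the base checkpoints named above. It recreates the architecture with freshly initialized cross-attention, not this model's trained weights.

```python
from transformers import (
    AutoImageProcessor,
    AutoTokenizer,
    TrOCRProcessor,
    VisionEncoderDecoderModel,
)

# Combine the image encoder and text decoder into one seq2seq model.
model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "facebook/deit-base-distilled-patch16-384",  # ViT/DeiT image encoder
    "bert-base-multilingual-cased",              # BERT text decoder
)

image_processor = AutoImageProcessor.from_pretrained("facebook/deit-base-distilled-patch16-384")
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
processor = TrOCRProcessor(image_processor=image_processor, tokenizer=tokenizer)

# Special-token wiring the generation loop needs.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.eos_token_id = tokenizer.sep_token_id
```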

### Compute Infrastructure

#### Hardware

NVIDIA GPU (e.g., A100)

#### Software

Python, PyTorch, Hugging Face Transformers

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

- **CER (character error rate):** edit distance between the predicted and reference text, divided by the number of reference characters
- **WER (word error rate):** the same measure computed over words instead of characters
- **OCR (optical character recognition):** extracting machine-readable text from images
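
Both error rates can be computed with the 🤗 `evaluate` library (the strings below are toy examples; the `cer` and `wer` metrics require the `jiwer` package):

```python
import evaluate

cer = evaluate.load("cer")
wer = evaluate.load("wer")

predictions = ["some predicted text"]
references = ["some reference text"]

print("CER:", cer.compute(predictions=predictions, references=references))
print("WER:", wer.compute(predictions=predictions, references=references))
```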

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

Fajar Pervaiz

## Model Card Contact

[email protected]