# Urdu GPT Model

This is a GPT-2-style language model trained on Urdu text data.

## Model Details

- Architecture: GPT-2 style
- Vocabulary Size: 50257
- Context Length: 256 tokens
- Embedding Dimension: 768
- Number of Layers: 12
- Number of Attention Heads: 12

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer from the Hub
model = AutoModelForCausalLM.from_pretrained("your-username/model-name")
tokenizer = AutoTokenizer.from_pretrained("your-username/model-name")

# Generate text from an Urdu prompt ("عشق" means "love")
text = "عشق"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs)
result = tokenizer.decode(outputs[0])
print(result)
```
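
Called with no extra arguments, `generate` falls back to the model's default generation settings, which usually means greedy decoding with a short maximum length. For longer or more varied continuations, something like the sketch below can be used; the parameter values are illustrative assumptions rather than settings recommended for this particular model.

```python
# Continuing from the Usage snippet above: sampled generation with a longer
# output. These values are illustrative, not tuned for this model.
outputs = model.generate(
    **inputs,
    max_new_tokens=100,  # allow a longer continuation
    do_sample=True,      # sample instead of greedy decoding
    top_p=0.9,           # nucleus sampling
    temperature=0.8,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```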
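
For reference, the hyperparameters listed under Model Details correspond roughly to the GPT-2 configuration below. This is a minimal sketch assuming the standard `GPT2Config` / `GPT2LMHeadModel` classes from `transformers`; anything not listed on this card (dropout, activation, weight tying) is left at the library defaults and should be treated as an assumption.

```python
from transformers import GPT2Config, GPT2LMHeadModel

# GPT-2 config matching the hyperparameters listed under Model Details.
# Unlisted settings (dropout, activation, etc.) use transformers defaults.
config = GPT2Config(
    vocab_size=50257,   # Vocabulary Size
    n_positions=256,    # Context Length
    n_embd=768,         # Embedding Dimension
    n_layer=12,         # Number of Layers
    n_head=12,          # Number of Attention Heads
)

model = GPT2LMHeadModel(config)  # randomly initialised; load trained weights separately
print(f"{model.num_parameters():,} parameters")
```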