---
widget:
- text: 'I love AutoTrain because '
license: mit
language:
- en
library_name: peft
pipeline_tag: text-generation
---

### Base Model Description

The Pythia 70M model is a transformer-based language model developed by EleutherAI. It is part of the Pythia series, known for its strong performance in natural language understanding and generation tasks. With 70 million parameters, it is designed to handle a wide range of NLP applications, offering a balance between computational efficiency and model capability.

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** Pravin Maurya
- **Model type:** LoRA fine-tuned transformer model
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** EleutherAI/pythia-70m

### Model Sources [optional]

- **Colab Link:** [Click me🔗](https://colab.research.google.com/drive/1tyogv7jtc8a4h23pEIlJW2vBgWTTzy3e#scrollTo=b6fQzRl2faSn)

## Uses

The model can be fine-tuned further for specific downstream applications such as medical AI assistants, legal document generation, and other domain-specific NLP tasks; a fine-tuning sketch appears at the end of this card.

## How to Get Started with the Model

Use the code below to get started with the model. If the repository hosts a PEFT adapter rather than merged weights, see the adapter-loading sketch at the end of this card.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Pravincoder/Pythia-legal-finetuned-llm")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m")

def inference(text, model, tokenizer, max_input_tokens=1000, max_output_tokens=200):
    # Tokenize the prompt, truncating it to the input budget.
    input_ids = tokenizer.encode(text, return_tensors="pt", truncation=True, max_length=max_input_tokens)
    device = model.device
    # Generate up to max_output_tokens new tokens after the prompt.
    generated_tokens_with_prompt = model.generate(input_ids=input_ids.to(device), max_new_tokens=max_output_tokens)
    generated_text_with_prompt = tokenizer.batch_decode(generated_tokens_with_prompt, skip_special_tokens=True)
    # Strip the prompt from the decoded output, keeping only the answer.
    generated_text_answer = generated_text_with_prompt[0][len(text):]
    return generated_text_answer

system_message = "Welcome to the medical AI assistant."
user_message = "What are the symptoms of influenza?"
prompt = system_message + "\n" + user_message
generated_response = inference(prompt, model, tokenizer)
print("Generated Response:", generated_response)
```

## Training Data

The model was fine-tuned on a private dataset of Indian traffic law text.

### Training Procedure

Data preprocessing involved tokenization and formatting suitable for the transformer model.

#### Training Hyperparameters

- **Training regime:** Mixed precision (fp16)

## Hardware

- **Hardware Type:** T4 Google Colab GPU
- **Hours used:** 2-4 hours

## Model Card Contact

Email: PravinCoder@gmail.com

# Model Trained Using AutoTrain
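
### Loading the LoRA Adapter with PEFT [optional]

Because this is a LoRA fine-tune (the card's `library_name` is `peft`), the repository may host a PEFT adapter rather than fully merged weights. The snippet below is a minimal sketch of attaching the adapter to the base model; it assumes the repo contains a PEFT `adapter_config.json`, which has not been verified here. If the weights are already merged, the `AutoModelForCausalLM` snippet above is sufficient.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the frozen base model, then attach the LoRA adapter weights on top.
# Assumption: the Hub repo hosts a PEFT adapter (adapter_config.json + weights).
base_model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-70m")
model = PeftModel.from_pretrained(base_model, "Pravincoder/Pythia-legal-finetuned-llm")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m")

# Optional: fold the adapter into the base weights for faster inference.
model = model.merge_and_unload()
```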
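
### Further Fine-Tuning Sketch [optional]

For the domain-specific fine-tuning described under Uses, using the fp16 mixed-precision regime noted in Training Hyperparameters, a configuration sketch follows. The LoRA hyperparameters (`r`, `lora_alpha`, `target_modules`) are illustrative assumptions, not the values used to train this model.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

base_model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-70m")

# Illustrative LoRA settings; the actual training values are not published.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["query_key_value"],  # attention projection in Pythia (GPT-NeoX)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the LoRA matrices are trainable

training_args = TrainingArguments(
    output_dir="pythia-70m-lora",
    per_device_train_batch_size=8,
    num_train_epochs=3,
    fp16=True,  # mixed-precision regime from Training Hyperparameters
)
# Pass `model`, `training_args`, and a tokenized dataset to transformers.Trainer
# to run the fine-tuning loop.
```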