---
license: mit
datasets:
  - ShashiVish/cover-letter-dataset
language:
  - en
widget:
  - example_title: Cover Letter
    text: >-
      <start_of_turn>user Generate Cover Letter for Role: ML Engineer,
      Preferred Qualifications: strong AI related skills, Hiring Company:
      Google, User Name: Vicky, Past Working Experience: Internship in
      CodeClause, Current Working Experience: Fresher, Skillsets: Machine
      Learning, Deep Learning, AI, SQL, NLP, Qualifications: Bachelor of
      Commerce with Computer Application <end_of_turn>\n<start_of_turn>model
tags:
  - cover-letter
inference:
  parameters:
    max_new_tokens: 250
    do_sample: false
pipeline_tag: text-generation
---

Gemma-2B Fine-Tuned Cover Letter Model

Overview

Gemma-2B Fine-Tuned Cover Letter Model is a language model based on the Gemma-2B architecture, fine-tuned on the ShashiVish/cover-letter-dataset to generate tailored cover letters. Given structured details such as the target role, hiring company, and the applicant's experience, skills, and qualifications, the model drafts a complete cover letter.
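For a quick smoke test, the model can also be loaded through the transformers pipeline API. This is a minimal sketch; the generation settings and the shortened prompt here are illustrative, and the full structured prompt shown under Inference below will give better results:

# A minimal sketch using the transformers pipeline API (illustrative settings)
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Mr-Vicky-01/Gemma2B-Finetuned-CoverLetter",
)
prompt = ("<start_of_turn>user Generate Cover Letter for Role: ML Engineer, "
          "Hiring Company: Google, User Name: Vicky, Skillsets: Machine Learning, NLP "
          "<end_of_turn>\n<start_of_turn>model")
print(generator(prompt, max_new_tokens=250, do_sample=False)[0]["generated_text"])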

Model Details

  • Model Name: Gemma-2B Fine-Tuned Cover Letter Model
  • Model Type: Causal Language Model
  • Base Model: Gemma-2B
  • Language: English
  • Task: Cover Letter Generation

Example Use Cases

  • Drafting: generating a complete cover letter from a role title, hiring company, and candidate profile.
  • Tailoring: adapting the letter to the preferred qualifications listed for a specific opening.
  • First drafts for early-career candidates: turning a short list of skills and qualifications into a structured letter.
  • Batch generation: producing letters for several openings by varying the input fields (see the sketch after this list).
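
All of these use cases run through the same structured prompt format. A hypothetical helper for assembling it (build_prompt is not part of this repository; the field names follow the training prompt shown in the Inference section):

# Hypothetical helper: joins structured fields into the chat-turn
# prompt format the model was fine-tuned on.
def build_prompt(fields: dict) -> str:
    body = ", ".join(f"{key}: {value}" for key, value in fields.items())
    return (f"<start_of_turn>user Generate Cover Letter for {body} "
            f"<end_of_turn>\n<start_of_turn>model")

prompt = build_prompt({
    "Role": "ML Engineer",
    "Hiring Company": "Google",
    "User Name": "Vicky",
    "Skillsets": "Machine Learning, Deep Learning, AI, SQL, NLP",
})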

How to Use

  1. Install the required packages (transformers and torch):
     pip install -q -U transformers==4.38.0
     pip install torch
    
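Optionally, confirm the installed version and whether a GPU is visible before loading the model:

# Quick environment sanity check
import torch
import transformers

print(transformers.__version__)   # expected: 4.38.0
print(torch.cuda.is_available())  # True if a CUDA GPU can be used
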

Inference

  1. Use the model in a notebook or script:
# Load model directly
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Mr-Vicky-01/Gemma2B-Finetuned-CoverLetter")
model = AutoModelForCausalLM.from_pretrained("Mr-Vicky-01/Gemma2B-Finetuned-CoverLetter")

# Structured inputs for the cover letter (example values)
job_title = "ML Engineer"
preferred_qualification = "strong AI related skills"
hiring_company_name = "Google"
user_name = "Vicky"
past_working_experience = "N/A"
current_working_experience = "Fresher"
skillset = "Machine Learning, Deep Learning, AI, SQL, NLP"
qualification = "Bachelor of Commerce with Computer Application"


# Assemble the chat-turn prompt the model was fine-tuned on
prompt = f"<start_of_turn>user Generate Cover Letter for Role: {job_title}, \
 Preferred Qualifications: {preferred_qualification}, \
 Hiring Company: {hiring_company_name}, User Name: {user_name}, \
 Past Working Experience: {past_working_experience}, Current Working Experience: {current_working_experience}, \
 Skillsets: {skillset}, Qualifications: {qualification} <end_of_turn>\n<start_of_turn>model"

# Tokenize the prompt into input ids
encodeds = tokenizer(prompt, return_tensors="pt", add_special_tokens=True).input_ids

# Move the model and inputs to the GPU if one is available
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model.to(device)
inputs = encodeds.to(device)


# Greedy decoding; increase max_new_tokens if the letter is cut off
generated_ids = model.generate(inputs, max_new_tokens=250, do_sample=False, pad_token_id=tokenizer.eos_token_id)

# Keep only the text up to the first <end_of_turn> marker
decoded = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
ans = ''.join(decoded.split('<end_of_turn>')[:2])

# Extract only the model's answer (the text after the "model" turn marker)
model_answer = ans.split("model")[1].strip()
print(model_answer)
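
Since generate() on a causal model returns the prompt followed by the completion, a slightly more robust sketch is to decode only the newly generated tokens instead of splitting the decoded string:

# Alternative: decode only the tokens produced after the prompt
new_tokens = generated_ids[0][inputs.shape[-1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True).strip())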