Gemma-2B Fine-Tuned for Cover Letter Creation
Overview
Gemma-2B Fine-Tuned for Cover Letter Creation is a causal language model based on the Gemma-2B architecture, fine-tuned specifically for generating cover letters. Given a job title, the hiring company, and the applicant's qualifications, skills, and work experience, the model drafts a tailored cover letter for the application.
Model Details
- Model Name: Gemma-2B Fine-Tuned for Cover Letter Creation
- Model Type: Causal Language Model
- Base Model: Gemma-2B
- Language: English
- Task: Cover Letter Generation
How to Use
- Install the required packages:
pip install -q -U transformers==4.38.0
pip install torch
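Optionally, you can confirm the installation before loading the model (a quick sanity check, not required):
import torch
import transformers

# Verify the pinned transformers version and check whether a CUDA GPU is visible
print(transformers.__version__)   # should print 4.38.0
print(torch.cuda.is_available())  # True if inference can run on a GPU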
Inference
- How to use the model in a notebook:
# Load model directly
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Mr-Vicky-01/Gemma2B-Finetuned-CoverLetter")
model = AutoModelForCausalLM.from_pretrained("Mr-Vicky-01/Gemma2B-Finetuned-CoverLetter")
# Details used to personalize the cover letter
job_title = "ML Engineer"
preferred_qualification = "strong AI related skills"
hiring_company_name = "Google"
user_name = "Vicky"
past_working_experience = "N/A"
current_working_experience = "Fresher"
skillset = "Machine Learning, Deep Learning, AI, SQL, NLP"
qualification = "Bachelor of Commerce with Computer Application"

# Build the prompt in the Gemma chat format the model was fine-tuned on
prompt = f"<start_of_turn>user Generate Cover Letter for Role: {job_title}, \
Preferred Qualifications: {preferred_qualification}, \
Hiring Company: {hiring_company_name}, User Name: {user_name}, \
Past Working Experience: {past_working_experience}, Current Working Experience: {current_working_experience}, \
Skillsets: {skillset}, Qualifications: {qualification} <end_of_turn>\n<start_of_turn>model"
# Tokenize the prompt and move the model and inputs to GPU if one is available
encodeds = tokenizer(prompt, return_tensors="pt", add_special_tokens=True).input_ids
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model.to(device)
inputs = encodeds.to(device)
# Increase max_new_tokens if needed
generated_ids = model.generate(inputs, max_new_tokens=250, do_sample=False, pad_token_id=tokenizer.eos_token_id)
# Decode the output and keep only the prompt plus the first model turn
ans = ''
for i in tokenizer.decode(generated_ids[0], skip_special_tokens=True).split('<end_of_turn>')[:2]:
    ans += i

# Extract only the model's answer (the text after the "model" turn marker)
model_answer = ans.split("model", 1)[1].strip()
print(model_answer)
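To generate letters for several applicants or roles, the steps above can be wrapped in a small helper. This is a minimal sketch rather than part of the published model card: the function name generate_cover_letter and its arguments are illustrative, and it assumes the tokenizer, model, and device from the snippet above are already loaded.
def generate_cover_letter(job_title, preferred_qualification, hiring_company_name,
                          user_name, past_working_experience, current_working_experience,
                          skillset, qualification, max_new_tokens=250):
    # Assemble the same Gemma-style prompt used above
    prompt = (f"<start_of_turn>user Generate Cover Letter for Role: {job_title}, "
              f"Preferred Qualifications: {preferred_qualification}, "
              f"Hiring Company: {hiring_company_name}, User Name: {user_name}, "
              f"Past Working Experience: {past_working_experience}, "
              f"Current Working Experience: {current_working_experience}, "
              f"Skillsets: {skillset}, Qualifications: {qualification} <end_of_turn>\n<start_of_turn>model")
    inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=True).input_ids.to(device)
    generated_ids = model.generate(inputs, max_new_tokens=max_new_tokens, do_sample=False,
                                   pad_token_id=tokenizer.eos_token_id)
    decoded = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
    # Return only the text after the "model" turn marker
    return decoded.split("model", 1)[1].strip()

print(generate_cover_letter("Data Scientist", "experience with NLP", "OpenAI", "Vicky",
                            "N/A", "Fresher", "Machine Learning, NLP, SQL",
                            "Bachelor of Commerce with Computer Application"))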