LLAMA 3 Story Point Estimator - talendesb - mesos
This model is fine-tuned on issue descriptions from the talendesb project and evaluated on the mesos project for story point estimation.
Model Details
- Base Model: LLAMA 3.2 1B (meta-llama/Llama-3.2-1B)
- Training Project: talendesb
- Test Project: mesos
- Task: Story Point Estimation (Regression)
- Architecture: PEFT (LoRA)
- Input: Issue titles
- Output: Story point estimate (continuous value)
Usage
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftConfig, PeftModel

# Load the adapter config to locate the base model
config = PeftConfig.from_pretrained("DEVCamiloSepulveda/000-LLAMA3SP-talendesb-mesos")

# Load tokenizer and base model, then attach the LoRA adapter
tokenizer = AutoTokenizer.from_pretrained("DEVCamiloSepulveda/000-LLAMA3SP-talendesb-mesos")
base_model = AutoModelForSequenceClassification.from_pretrained(
    config.base_model_name_or_path,
    num_labels=1,  # single regression output (story points)
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, "DEVCamiloSepulveda/000-LLAMA3SP-talendesb-mesos")
model.eval()

# Llama tokenizers often ship without a pad token; fall back to EOS if needed
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
if model.config.pad_token_id is None:
    model.config.pad_token_id = tokenizer.pad_token_id

# Prepare input text (the model was trained on a 20-token window)
text = "Your issue description here"
inputs = tokenizer(text, return_tensors="pt", truncation=True,
                   max_length=20, padding="max_length").to(model.device)

# Get prediction
with torch.no_grad():
    outputs = model(**inputs)
story_points = outputs.logits.item()
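For scoring several issues at once, a batched variant along these lines should work; the issue titles below are invented for illustration:

titles = [
    "Fix memory leak in executor shutdown",
    "Add pagination to the task listing API",
]
batch = tokenizer(titles, return_tensors="pt", truncation=True,
                  max_length=20, padding="max_length").to(model.device)
with torch.no_grad():
    logits = model(**batch).logits  # shape: (len(titles), 1)
for title, estimate in zip(titles, logits.squeeze(-1).float().tolist()):
    print(f"{estimate:.2f}  {title}")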
Training Details
- Fine-tuning method: LoRA (Low-Rank Adaptation); see the configuration sketch after this list
- Sequence length: 20 tokens
- Best training epoch: 14 / 20 epochs
- Batch size: 32
- Training time: 657.352 seconds
- Mean Absolute Error (MAE): 1.611
- Median Absolute Error (MdAE): 1.199
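Below is a minimal sketch of how such a LoRA setup is typically assembled with PEFT. The card does not state the adapter hyperparameters, so r, lora_alpha, and lora_dropout here are assumptions, not the values actually used:

import torch
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForSequenceClassification.from_pretrained(
    "meta-llama/Llama-3.2-1B",
    num_labels=1,  # a single label makes Transformers treat this as regression (MSE loss)
    torch_dtype=torch.float16,
)

lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,  # sequence-classification head on top of the decoder
    r=8,                         # adapter rank -- assumption, not stated on the card
    lora_alpha=16,               # scaling factor -- assumption
    lora_dropout=0.1,            # assumption
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter and head weights are trainable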
Framework versions
- PEFT 0.14.0
Evaluation results
- Mean Absolute Error (MAE) on the mesos test set: 1.611 (self-reported)
- Median Absolute Error (MdAE) on the mesos test set: 1.199 (self-reported)
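Both metrics are simple functions of the absolute prediction errors and can be recomputed with a few lines of NumPy; y_true and y_pred below are made-up placeholders standing in for the mesos test labels and the model's predictions:

import numpy as np

y_true = np.array([3.0, 5.0, 2.0, 8.0])  # placeholder labels
y_pred = np.array([3.5, 4.2, 2.1, 6.0])  # placeholder predictions

errors = np.abs(y_true - y_pred)
mae = errors.mean()       # Mean Absolute Error
mdae = np.median(errors)  # Median Absolute Error
print(f"MAE: {mae:.3f}, MdAE: {mdae:.3f}")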