---
license: llama3.2
language:
- en
base_model:
- meta-llama/Llama-3.2-1B
pipeline_tag: text-classification
library_name: transformers
tags:
- regression
- story-point-estimation
- software-engineering
datasets:
- datamanagement
metrics:
- mae
- mdae
model-index:
- name: llama-3.2-1b-story-point-estimation
  results:
  - task:
      type: regression
      name: Story Point Estimation
    dataset:
      type: datamanagement
      name: datamanagement Dataset
      split: test
    metrics:
    - type: mae
      value: 6.511
      name: Mean Absolute Error (MAE)
    - type: mdae
      value: 3.661
      name: Median Absolute Error (MdAE)
---

# LLAMA 3 Story Point Estimator - datamanagement

This model is fine-tuned on issue descriptions from the datamanagement project and evaluated on the same project's test split for story point estimation.

## Model Details
- Base Model: LLAMA 3.2 1B
- Training Project: datamanagement
- Test Project: datamanagement
- Task: Story Point Estimation (Regression)
- Architecture: PEFT (LoRA), loaded as shown in the sketch below
- Input: Issue titles
- Output: Story point estimate (continuous value)
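
Because the repository ships a LoRA adapter rather than full model weights, it can also be loaded explicitly on top of the base model. The following is a minimal sketch, assuming a single-logit regression head (`num_labels=1`) and the usual Llama workaround of reusing the EOS token for padding; neither detail is stated in this card.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

# Base model with a single output logit (num_labels=1 is an assumption)
base = AutoModelForSequenceClassification.from_pretrained(
    "meta-llama/Llama-3.2-1B", num_labels=1
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")

# Llama ships without a pad token; reusing EOS is a common workaround
tokenizer.pad_token = tokenizer.eos_token
base.config.pad_token_id = tokenizer.pad_token_id

# Apply the LoRA adapter from this repository
model = PeftModel.from_pretrained(base, "DEVCamiloSepulveda/0-LLAMA3SP-datamanagement")
```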

## Usage

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load tokenizer and model; the `peft` package must be installed so that
# transformers can apply the LoRA adapter automatically
tokenizer = AutoTokenizer.from_pretrained("DEVCamiloSepulveda/0-LLAMA3SP-datamanagement")
model = AutoModelForSequenceClassification.from_pretrained("DEVCamiloSepulveda/0-LLAMA3SP-datamanagement")

# Prepare input text (the model was trained on sequences of 20 tokens)
text = "Your issue description here"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=20, padding="max_length")

# Get prediction (the single regression logit is the story point estimate)
with torch.no_grad():
    outputs = model(**inputs)
story_points = outputs.logits.item()
```
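
To score several issues at once, the same tokenizer call accepts a list of strings. This continues the snippet above; the example titles are illustrative.

```python
# Batch prediction: one story point estimate per issue title
texts = ["Add login endpoint", "Migrate database schema"]
batch = tokenizer(texts, return_tensors="pt", truncation=True, max_length=20, padding="max_length")
with torch.no_grad():
    estimates = model(**batch).logits.squeeze(-1)  # shape: (num_issues,)
print(estimates.tolist())
```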

## Training Details
- Fine-tuning method: LoRA (Low-Rank Adaptation); an illustrative setup is sketched after this list
- Sequence length: 20 tokens
- Best training epoch: 0 / 20 epochs
- Batch size: 32
- Training time: 101.233 seconds
- Mean Absolute Error (MAE): 6.511
- Median Absolute Error (MdAE): 3.661
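
The card does not list the LoRA hyperparameters, so the values below (rank, alpha, dropout) are placeholders; only the task type and the use of LoRA come from this card. A minimal sketch of how such a setup is typically built with `peft`:

```python
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForSequenceClassification.from_pretrained(
    "meta-llama/Llama-3.2-1B", num_labels=1  # single logit for regression
)

# Hypothetical hyperparameters; the card does not state the actual values
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only adapter (and head) weights train
```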