---
license: llama3.2
language:
- en
base_model:
- meta-llama/Llama-3.2-1B
pipeline_tag: text-classification
library_name: transformers
tags:
- regression
- story-point-estimation
- software-engineering
datasets:
- bamboo
metrics:
- mae
- mdae
model-index:
- name: llama-3.2-1b-story-point-estimation
  results:
  - task:
      type: regression
      name: Story Point Estimation
    dataset:
      type: bamboo
      name: bamboo Dataset
      split: test
    metrics:
    - type: mae
      value: 1.104
      name: Mean Absolute Error (MAE)
    - type: mdae
      value: 0.832
      name: Median Absolute Error (MdAE)
---

# LLAMA 3.2 Story Point Estimator - bamboo

This model is fine-tuned on issue descriptions from the bamboo project and evaluated on the same project for story point estimation.

## Model Details

- Base Model: LLAMA 3.2 1B
- Training Project: bamboo
- Test Project: bamboo
- Task: Story Point Estimation (Regression)
- Architecture: PEFT (LoRA)
- Input: Issue titles
- Output: Story point estimate (continuous value)

## Usage

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("DEVCamiloSepulveda/0-LLAMA3SP-bamboo")
model = AutoModelForSequenceClassification.from_pretrained("DEVCamiloSepulveda/0-LLAMA3SP-bamboo")
model.eval()

# Prepare input text (training used a 20-token context, so longer text is truncated)
text = "Your issue description here"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=20, padding="max_length")

# Get prediction: the single regression logit is the story point estimate
with torch.no_grad():
    outputs = model(**inputs)
story_points = outputs.logits.item()
```
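
If the repository hosts the LoRA adapter rather than merged weights, the adapter can also be attached to the base model explicitly with `peft`. This is a minimal sketch under that assumption (it further assumes the regression head was saved together with the adapter):

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification

# Assumption: the repo contains LoRA adapter weights plus the saved regression head.
# If the published checkpoint is already merged, the plain from_pretrained call
# in the snippet above is all you need.
base = AutoModelForSequenceClassification.from_pretrained(
    "meta-llama/Llama-3.2-1B", num_labels=1
)
model = PeftModel.from_pretrained(base, "DEVCamiloSepulveda/0-LLAMA3SP-bamboo")
model.eval()
```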

## Training Details

- Fine-tuning method: LoRA (Low-Rank Adaptation); an illustrative setup is sketched below
- Sequence length: 20 tokens
- Best epoch: 0 (of 20 trained)
- Batch size: 32
- Training time: 11.101 seconds
- Mean Absolute Error (MAE): 1.104
- Median Absolute Error (MdAE): 0.832
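
The card does not list the exact LoRA hyperparameters, so the following is only an illustrative `peft` setup; every `LoraConfig` value below is an assumption, not the configuration actually used:

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification

# All hyperparameter values here are illustrative assumptions.
base = AutoModelForSequenceClassification.from_pretrained(
    "meta-llama/Llama-3.2-1B", num_labels=1
)
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,   # classification head with num_labels=1 acts as regression
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    modules_to_save=["score"],    # keep the regression head trainable and saved
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```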
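
For reference, the two reported metrics are the mean and the median of the absolute errors between actual and predicted story points. A minimal sketch of how they can be computed (the values below are placeholders, not this card's evaluation data):

```python
import numpy as np

def mae_and_mdae(y_true, y_pred):
    """Mean and median absolute error between actual and predicted story points."""
    errors = np.abs(np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float))
    return errors.mean(), np.median(errors)

# Placeholder values for illustration only
mae, mdae = mae_and_mdae(y_true=[1, 2, 3, 5], y_pred=[1.5, 2.2, 2.1, 4.0])
print(f"MAE={mae:.3f}, MdAE={mdae:.3f}")
```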