---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
|
|
|
|
|
### Model Description

- meta-llama/Llama-2-7b-chat-hf fine-tuned on the NSMC (Naver Sentiment Movie Corpus) dataset
- Given a prompt containing a movie-review text, the model directly generates the prediction text '긍정' (positive) or '부정' (negative); see the inference sketch below
- The top 2,000+ samples of the NSMC train split were used for training
- Only the top 1,000 samples of the test split were used for evaluation
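
Below is a minimal inference sketch, assuming the LoRA adapter is loaded on top of the base model with PEFT. The prompt template and `adapter_id` are illustrative placeholders; the card does not document the exact prompt used during fine-tuning.

```python
# Minimal inference sketch (assumptions: prompt template and adapter repo id).
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-chat-hf"
adapter_id = "path/to/this-adapter"  # hypothetical: replace with this repo's id

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()

review = "이 영화 정말 재미있었어요"  # example NSMC-style review
# Assumed prompt template: ask the model to answer '긍정' (positive) or '부정' (negative).
prompt = f"다음 영화 리뷰가 긍정인지 부정인지 분류하세요.\n리뷰: {review}\n답변:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=4, do_sample=False)
# Decode only the newly generated tokens, i.e. the predicted label text.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```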
|
|
|
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- logging_steps: 100
- max_steps: 1600
- trainable params: 19,988,480 || all params: 6,758,404,096 || trainable%: 0.2957573965106688
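
The card does not include the PEFT configuration itself, but the reported 19,988,480 trainable parameters match LoRA with rank r=8 applied to all seven attention/MLP projection modules of Llama-2-7B. The sketch below reproduces the listed hyperparameters under that assumption; `lora_alpha`, `lora_dropout`, and the dataset-loading call are illustrative guesses, not values from the card.

```python
# Configuration sketch, not the card's actual training script.
from datasets import load_dataset
from peft import LoraConfig
from transformers import TrainingArguments

# Top 2,000 reviews of the NSMC train split (recent `datasets` versions may
# require trust_remote_code=True, since NSMC ships a loading script).
train_data = load_dataset("e9t/nsmc", split="train[:2000]")

peft_config = LoraConfig(
    r=8,                # consistent with the reported 19,988,480 trainable params
    lora_alpha=16,      # assumed
    lora_dropout=0.05,  # assumed
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="llama2-nsmc",       # hypothetical
    learning_rate=1e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,  # total train batch size: 2
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    logging_steps=100,
    max_steps=1600,
    seed=42,
)
```

With max_steps=1600 and an effective batch size of 2, training covers 3,200 samples, i.e. 1.6 passes over 2,000 examples, which matches the epoch value of 1.6 reported below.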
|
|
|
### Training Results

- global_step: 1600
- training_loss: 0.7892872190475464
- train_runtime: 5825.2445 s
- train_samples_per_second: 0.549
- train_steps_per_second: 0.275
- total_flos: 6.51493254365184e+16
- epoch: 1.6
|
|
|
|
|
### Accuracy

Llama2 (fine-tuned): accuracy 0.52
|
|
|
|                    | Actual positive | Actual negative |
|--------------------|-----------------|-----------------|
| Predicted positive | 192             | 168             |
| Predicted negative | 317             | 324             |
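
From the confusion matrix, accuracy = (192 + 324) / (192 + 168 + 317 + 324) = 516 / 1001 ≈ 0.52.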
|
|
|
Several attempts were made to improve the accuracy, but errors occurred repeatedly.
|
|
|
### Model Card Authors
|
|
|
cxoijve