---
license: apache-2.0
library_name: peft
tags:
- trl
- dpo
- generated_from_trainer
base_model: jingyeom/freeze_KoSoLAR-10.7B-v0.2_1.4_dedup
model-index:
- name: lora_freeze_KoSoLAR-10.7B-v0.2_1.4_dedup_SFT-DPO
  results: []
---
# lora_freeze_KoSoLAR-10.7B-v0.2_1.4_dedup_SFT-DPO

This model is a LoRA (PEFT) adapter fine-tuned with DPO from [jingyeom/freeze_KoSoLAR-10.7B-v0.2_1.4_dedup](https://huggingface.co/jingyeom/freeze_KoSoLAR-10.7B-v0.2_1.4_dedup) on an unknown dataset.
## Model description

This repository holds a PEFT LoRA adapter for [jingyeom/freeze_KoSoLAR-10.7B-v0.2_1.4_dedup](https://huggingface.co/jingyeom/freeze_KoSoLAR-10.7B-v0.2_1.4_dedup), a 10.7B-parameter KoSOLAR derivative. Per the adapter name and tags, the base model went through a supervised fine-tuning (SFT) stage, and this adapter adds a DPO alignment stage trained with TRL; the trainer recorded no further details.
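Below is a minimal usage sketch, not part of the original card: it assumes the adapter is published under the repository id shown and loads with the standard PEFT API.

```python
# Minimal inference sketch: load the base model, attach this LoRA adapter,
# and generate. The adapter repo id below is an assumption based on the card's name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "jingyeom/freeze_KoSoLAR-10.7B-v0.2_1.4_dedup"
adapter_id = "jingyeom/lora_freeze_KoSoLAR-10.7B-v0.2_1.4_dedup_SFT-DPO"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the DPO-trained LoRA weights

prompt = "대한민국의 수도는 어디인가요?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```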
## Intended uses & limitations

More information needed
## Training and evaluation data

More information needed
## Training procedure
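The card does not record the training script or hyperparameters. As a rough sketch only, a LoRA DPO run on this stack (a TRL release contemporary with the pinned Transformers/PEFT versions below) is typically set up along these lines; every hyperparameter and the inline preference dataset are illustrative assumptions, not the author's configuration.

```python
# Illustrative LoRA + DPO training sketch using TRL's DPOTrainer (0.7.x-era API).
# All values here are assumptions; the card does not state the real ones.
from datasets import Dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base_id = "jingyeom/freeze_KoSoLAR-10.7B-v0.2_1.4_dedup"
model = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# DPO trains on preference triples: a prompt, a preferred answer, and a rejected one.
train_dataset = Dataset.from_dict({
    "prompt": ["대한민국의 수도는 어디인가요?"],
    "chosen": ["대한민국의 수도는 서울입니다."],
    "rejected": ["잘 모르겠습니다."],
})

peft_config = LoraConfig(  # example LoRA settings, not the card's actual ones
    r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM"
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,  # with a peft_config, the adapter-disabled base acts as the reference
    args=TrainingArguments(
        output_dir="lora_dpo_out",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        learning_rate=5e-6,
    ),
    beta=0.1,  # DPO temperature (illustrative)
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```

Passing `peft_config` means only the LoRA weights are updated, and TRL computes reference log-probabilities from the same model with adapters disabled, so no second full copy of the 10.7B base is needed.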
### Training results

No training or evaluation metrics were reported for this run.
### Framework versions

- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
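To reproduce this environment, pin the versions above when installing, e.g. `pip install "peft==0.7.1" "transformers==4.36.2" "datasets==2.16.1" "tokenizers==0.15.0"` together with a CUDA 12.1 build of PyTorch 2.1.0.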