poly-ko-1.3b-translate

  • A model fine-tuned from EleutherAI/polyglot-ko-1.3b on squarelike/sharegpt_deepl_ko_translation so that it performs English-to-Korean translation only (a usage sketch follows below)
  • Fine-tuned with the QLoRA technique
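
The card does not document a prompt template or an inference snippet, so the following is only a minimal sketch of how the model might be loaded and queried with the transformers library. It assumes the repository ships weights loadable with AutoModelForCausalLM, and the prompt format shown is a placeholder, not the one used during training.

```python
# Hedged usage sketch: loading the model and translating one English sentence.
# The prompt template below is an assumption and may need to be adapted.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aeolian83/poly-ko-1.3b-translate"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Hypothetical prompt format for English -> Korean translation.
prompt = "### English: The weather is nice today.\n### Korean:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=128,
        do_sample=False,  # greedy decoding for a deterministic translation
        eos_token_id=tokenizer.eos_token_id,
    )

# Print only the newly generated tokens (the Korean translation).
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```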

Training Information

  • Epochs: 1
  • Learning rate: 3e-4
  • Batch size: 3
  • LoRA r: 8
  • LoRA target modules: query_key_value

Training was done on a single RTX 3090 GPU; a configuration sketch reflecting the settings above follows.
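
The card lists only the hyperparameters above, so this is a hedged sketch of a QLoRA setup that matches them (r=8, target module query_key_value, learning rate 3e-4, batch size 3, 1 epoch). Values such as lora_alpha, dropout, and the dataset preprocessing are assumptions and are marked as such in the comments.

```python
# Hedged QLoRA configuration sketch matching the listed hyperparameters.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "EleutherAI/polyglot-ko-1.3b"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                    # QLoRA: 4-bit quantized base weights
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=8,                                  # LoRA rank from the card
    lora_alpha=16,                        # assumption: not stated in the card
    target_modules=["query_key_value"],   # target module from the card
    lora_dropout=0.05,                    # assumption: not stated in the card
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

training_args = TrainingArguments(
    output_dir="poly-ko-1.3b-translate",
    num_train_epochs=1,                   # from the card
    learning_rate=3e-4,                   # from the card
    per_device_train_batch_size=3,        # from the card
    fp16=True,
    logging_steps=50,
)

# A Trainer (or trl's SFTTrainer) over the English-Korean pairs from
# squarelike/sharegpt_deepl_ko_translation would then perform the training.
```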

Dataset used to train aeolian83/poly-ko-1.3b-translate

  • squarelike/sharegpt_deepl_ko_translation