---
library_name: transformers
license: apache-2.0
base_model: google/gemma-2-27b-it
datasets:
- Saxo/ko_cn_translation_tech_social_science_linkbricks_single_dataset
- Saxo/ko_jp_translation_tech_social_science_linkbricks_single_dataset
- Saxo/en_ko_translation_tech_science_linkbricks_single_dataset_with_prompt_text_huggingface
- Saxo/en_ko_translation_social_science_linkbricks_single_dataset_with_prompt_text_huggingface
- Saxo/ko_aspect_sentiment_sns_mall_sentiment_linkbricks_single_dataset_with_prompt_text_huggingface
- Saxo/ko_summarization_linkbricks_single_dataset_with_prompt_text_huggingface
- Saxo/OpenOrca_cleaned_kor_linkbricks_single_dataset_with_prompt_text_huggingface
- Saxo/ko_government_qa_total_linkbricks_single_dataset_with_prompt_text_huggingface_sampled
- maywell/ko_Ultrafeedback_binarized
language:
- ko
- en
- ja
- zh
pipeline_tag: text-generation
---

# Model Card for Saxo/Linkbricks-Horizon-AI-Korean-Gemma-2-sft-dpo-27B

Open Ko LLM Leaderboard Season 2 🏆 Rank-1 (2024/11/01–2024/12/28)


Dr. Yunsung Ji (Saxo), a data scientist at Linkbricks, a company specializing in AI and big-data analytics, fine-tuned the gemma-2-27b-it base model with SFT followed by DPO on eight H100-80G GPUs. This Korean language model was trained on Korean-Chinese-English-Japanese cross-lingual data and logical-reasoning data, so that it handles cross-lingual augmentation across Korean, Chinese, and Japanese as well as complex Korean logic problems. The tokenizer is the base model's, used as-is without vocabulary expansion. The model is particularly strengthened for high-level analysis of customer reviews and social media posts, and for coding.
- DeepSpeed ZeRO Stage 3, rsLoRA, and the BAdam layer-wise mode were used for fine-tuning (see the configuration sketch after this list).
- Run locally with Ollama: `ollama run benedict/linkbricks-gemma2-korean:27b`
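
The exact training configuration has not been published. As a rough illustration of the rsLoRA setting mentioned above, the sketch below shows how rank-stabilized LoRA can be enabled through the `peft` library; the rank, alpha, and target modules are assumed values, and DeepSpeed ZeRO Stage 3 and the BAdam optimizer would be configured separately in the training framework.

```python
# Illustrative sketch only: enabling rsLoRA with peft.
# r, lora_alpha, and target_modules are assumptions, not the values
# actually used to train this model.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("google/gemma-2-27b-it")

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    use_rslora=True,  # scale updates by alpha/sqrt(r) instead of alpha/r
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```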
## Benchmark (Open Ko LLM Leaderboard Season 2: No. 1)

Model: Saxo/Linkbricks-Horizon-AI-Korean-Gemma-2-sft-dpo-27B

| Benchmark       | Score |
|-----------------|------:|
| Average         | 51.37 |
| Ko-GPQA         | 25.25 |
| Ko-Winogrande   | 68.27 |
| Ko-GSM8k        | 70.96 |
| Ko-EQ Bench     | 50.25 |
| Ko-IFEval       | 49.84 |
| KorNAT-CKA      | 34.59 |
| KorNAT-SVA      | 48.42 |
| Ko-Harmlessness | 65.66 |
| Ko-Helpfulness  | 49.12 |
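
To load the checkpoint with Hugging Face `transformers` instead of Ollama, a minimal sketch is shown below; the dtype and generation settings are illustrative defaults, not values recommended by the author.

```python
# Minimal inference sketch using the transformers text-generation pipeline.
# torch_dtype and max_new_tokens are illustrative choices.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="Saxo/Linkbricks-Horizon-AI-Korean-Gemma-2-sft-dpo-27B",
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

# Gemma-2 instruction models use the chat template bundled with the tokenizer.
messages = [{"role": "user", "content": "한국어로 간단히 자기소개를 해줘."}]
result = pipe(messages, max_new_tokens=256)
print(result[0]["generated_text"][-1]["content"])
```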



For more information, see www.linkbricks.com and www.linkbricks.vc.