Saxo committed · Commit 2f04075 · verified · 1 Parent(s): 072ed7f

Update README.md

Files changed (1): README.md (+1 −1)
README.md CHANGED
@@ -65,7 +65,7 @@ Trained from the Hermes-3-Llama-3.1-70B base model on 8 H100-80G GPUs
 
 Finetuned by Mr. Yunsung Ji (Saxo), a data scientist at Linkbricks, a company specializing in AI and big data analytics <br>
 about 20% of total parameters Japanese CPT(Continued-Pretraining)->SFT->DPO training model based on Hermes-3-Llama-3.1-70B through 8 H100-80Gs as a Japanese boosting language model <br>
-It is a model that has been trained to handle Japanese-Korean-Chinese-English cross-training data and 10M korean news corpus and logic judgment data for various tasks to enable cross-fertilization processing and complex Korean logic & math problems. <br>
+It is a model that has been trained to handle Japanese-Korean-Chinese-English cross-training data and 20M Japanese news corpus and logic judgment data for various tasks to enable cross-fertilization processing and complex Korean logic & math problems. <br>
 -Tokenizer uses the base model without word expansion<br>
 -Models enhanced with high-dimensional analysis of customer reviews and social posts, as well as coding, writing, math and decision making<br>
 -Function Calling<br>