Savannah120 committed
Commit 1b8416c · 1 Parent(s): bce5dd4

Update README.md

Files changed (1): README.md (+2, -2)
README.md CHANGED
@@ -31,9 +31,9 @@ The MacBERT with 325M parameters is pre-trained for Chinese NLI tasks, and finet
  ## 模型信息 Model Information


- 为了提高模型在NLI上的效果,我们收集了大量NLI进行预训练,随后在FewCLUE的OCNLI任务进行微调,所有的训练均基于我们提出的UniMC框架。最终结果表明,3.25亿参数的模型通过我们的训练策略在NLI任务上可以达到1.3亿参数大模型的效果。
+ 为了提高模型在NLI上的效果,我们收集了大量NLI进行预训练,随后在FewCLUE的OCNLI任务进行微调,所有的训练均基于我们提出的UniMC框架。最终结果表明,3.25亿参数的模型通过我们的训练策略在NLI任务上可以达到1.3亿参数大模型相当的效果。

- To imporve the model performance on the NLI task, we collected numerous NLI datasets for pre-training. Then the model was finetuned on a specfic NLI task, OCNLI from FewCLUE. All the training is based on the UniMC framework we proposed. The results show that our model with 325M parameters could achieve comparable performance of the model with 1.3B paramters on the NLI task via our training strategies.
+ To improve the model performance on the NLI task, we collected numerous NLI datasets for pre-training. Then the model was finetuned on a specific NLI task, OCNLI from FewCLUE. All the training is based on the UniMC framework we proposed. The results show that our model with 325M parameters could achieve comparable performance to the model with 1.3B parameters on the NLI task via our training strategies.

  ### 下游效果 Performance
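For readers who want to try the finetuned NLI model this README describes, below is a minimal inference sketch. It assumes the checkpoint is published on the Hugging Face Hub as a standard sequence-classification model; the repo ID `IDEA-CCNL/Erlangshen-MacBERT-325M-NLI`, the example sentence pair, and the label handling are illustrative placeholders, not taken from this commit.

```python
# Minimal NLI inference sketch (assumption: the checkpoint loads via
# AutoModelForSequenceClassification; the repo ID below is a hypothetical
# placeholder, not confirmed by this commit).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "IDEA-CCNL/Erlangshen-MacBERT-325M-NLI"  # hypothetical repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

premise = "天气预报说明天会下雨。"      # illustrative premise
hypothesis = "明天的天气可能不太好。"    # illustrative hypothesis

# BERT-style NLI models take the sentence pair as: [CLS] premise [SEP] hypothesis [SEP]
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

pred_id = logits.argmax(dim=-1).item()
# The label order (entailment / neutral / contradiction) is defined by the
# checkpoint's config; check id2label rather than assuming an order.
print(model.config.id2label[pred_id])
```

Note that the commit text says training used the authors' UniMC framework, so the released checkpoint may instead expect UniMC-style choice-based inputs rather than plain sequence classification; check the model card for the exact usage before relying on this sketch.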