Update README.md
README.md CHANGED

@@ -21,9 +21,9 @@ widget:
 
 ## 简介 Brief Introduction
 
-2021年登顶FewCLUE和ZeroCLUE的中文BERT
+2021年登顶FewCLUE和ZeroCLUE的中文BERT,在数个相似度任务上微调后的版本
 
-This is the fine-tuned version of the Chinese BERT model on several
+This is the fine-tuned version of the Chinese BERT model on several similarity datasets, which topped FewCLUE and ZeroCLUE benchmark in 2021
 
 ## 模型分类 Model Taxonomy
 
@@ -33,7 +33,7 @@ This is the fine-tuned version of the Chinese BERT model on several paraphrase d
 
 ## 模型信息 Model Information
 
-基于[Erlangshen-MegatronBert-1.3B](https://huggingface.co/IDEA-CCNL/Erlangshen-MegatronBert-1.3B),我们在收集的20
+基于[Erlangshen-MegatronBert-1.3B](https://huggingface.co/IDEA-CCNL/Erlangshen-MegatronBert-1.3B),我们在收集的20个中文领域的改写数据集,总计227347个样本上微调了一个Similarity版本。
 
 Based on [Erlangshen-MegatronBert-1.3B] (https://huggingface.co/IDEA-CCNL/Erlangshen-MegatronBert-1.3B), we fine-tuned a similarity version on 8 Chinese paraphrase datasets, with totaling 227,347 samples.