Update README.md
README.md CHANGED
@@ -37,9 +37,9 @@ Adopting a unified framework to handle multiple information extraction tasks, AI
参考论文:[Unified BERT for Few-shot Natural Language Understanding](https://arxiv.org/abs/2206.12094)
-UBERT是[2022年AIWIN世界人工智能创新大赛:中文保险小样本多任务竞赛](http://ailab.aiwin.org.cn/competitions/68#results)的冠军解决方案。我们开发了一个基于类似BERT的骨干的多任务、多目标、统一的抽取任务框架。我们的UBERT在比赛A榜和B榜上均取得了第一名。因为比赛中的数据集在比赛结束后不再可用,我们开源的UBERT从多个任务中收集了70多个数据集(共1,065,069个样本)来进行预训练,并且我们选择了[MacBERT](https://huggingface.co/hfl/chinese-macbert-base)作为骨干网络。除了支持开箱即用之外,我们的UBERT还可以用于各种场景,如NLI、实体识别和阅读理解。示例代码可以在[Github](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/dev/yangping/fengshen/examples/ubert)中找到。
+UBERT是[2022年AIWIN世界人工智能创新大赛:中文保险小样本多任务竞赛](http://ailab.aiwin.org.cn/competitions/68#results)的冠军解决方案。我们开发了一个基于类似BERT的骨干的多任务、多目标、统一的抽取任务框架。我们的UBERT在比赛A榜和B榜上均取得了第一名。因为比赛中的数据集在比赛结束后不再可用,我们开源的UBERT从多个任务中收集了70多个数据集(共1,065,069个样本)来进行预训练,并且我们选择了[MacBERT-Base](https://huggingface.co/hfl/chinese-macbert-base)作为骨干网络。除了支持开箱即用之外,我们的UBERT还可以用于各种场景,如NLI、实体识别和阅读理解。示例代码可以在[Github](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/dev/yangping/fengshen/examples/ubert)中找到。
-UBERT was the winning solution in the [2022 AIWIN ARTIFICIAL INTELLIGENCE WORLD INNOVATIONS: Chinese Insurance Small Sample Multi-Task competition](http://ailab.aiwin.org.cn/competitions/68#results). We developed a unified, multi-task, multi-objective extraction framework on a BERT-like backbone, and our UBERT took first place on both leaderboards A and B. Because the competition datasets are no longer available after the competition, we carefully collected over 70 datasets (1,065,069 samples in total) from a variety of tasks to pre-train the open-source UBERT, adopting [MacBERT](https://huggingface.co/hfl/chinese-macbert-base) as the backbone. Besides working out of the box, UBERT can be employed in various scenarios such as NLI, entity recognition, and reading comprehension. Example code can be found on [GitHub](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/dev/yangping/fengshen/examples/ubert).
+UBERT was the winning solution in the [2022 AIWIN ARTIFICIAL INTELLIGENCE WORLD INNOVATIONS: Chinese Insurance Small Sample Multi-Task competition](http://ailab.aiwin.org.cn/competitions/68#results). We developed a unified, multi-task, multi-objective extraction framework on a BERT-like backbone, and our UBERT took first place on both leaderboards A and B. Because the competition datasets are no longer available after the competition, we carefully collected over 70 datasets (1,065,069 samples in total) from a variety of tasks to pre-train the open-source UBERT, adopting [MacBERT-Base](https://huggingface.co/hfl/chinese-macbert-base) as the backbone. Besides working out of the box, UBERT can be employed in various scenarios such as NLI, entity recognition, and reading comprehension. Example code can be found on [GitHub](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/dev/yangping/fengshen/examples/ubert).
## 使用 Usage
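The body of the Usage section falls outside this hunk. As a rough, minimal sketch of what loading the backbone named above looks like, assuming only the `transformers` library and the `hfl/chinese-macbert-base` checkpoint from the README (the full UBERT multi-task extraction pipeline lives in the fengshen example code linked above, not here):

```python
# Minimal sketch: load the MacBERT-Base backbone named in this README.
# This demonstrates only the encoder; UBERT's multi-task extraction
# heads come from the fengshen example code linked in the README.
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("hfl/chinese-macbert-base")
model = AutoModel.from_pretrained("hfl/chinese-macbert-base")

# Example insurance-domain query (hypothetical input for illustration).
inputs = tokenizer("保险合同的犹豫期是多久?", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # torch.Size([1, seq_len, 768])
```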