---
language:
- zh
license: apache-2.0
tags:
- bert
- NLU
- Sentiment
- Chinese
inference: false
widget:
- text: 今天心情不好
---
# Erlangshen-Ubert-110M (Chinese)

One model of Fengshenbang-LM.
We collected 70+ datasets in the Chinese domain for fine-tuning, with a total of 1,065,069 samples. Our model is mainly based on MacBERT.
Ubert is the solution we proposed for the 2022 AIWIN World Artificial Intelligence Innovation Competition, where it took first place on both the A and B leaderboards, improving on the officially provided baseline by 20 percentage points. Ubert can handle not only common extraction tasks such as entity recognition and event extraction, but also classification tasks such as news classification and natural language inference.
## Usage
Install our fengshen framework. For now, we provide the following installation method:
```shell
git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git
cd Fengshenbang-LM
pip install --editable ./
```
Run the code below to get prediction results in one step. You can freely change the example `text` and the `entity_type` values to extract, to try out the zero-shot performance:
```python
import argparse

# Note: the class name is spelled "UbertPiplines" in the fengshen package.
from fengshen import UbertPiplines

total_parser = argparse.ArgumentParser("TASK NAME")
total_parser = UbertPiplines.piplines_args(total_parser)
args = total_parser.parse_args()

test_data = [
    {
        "task_type": "抽取任务",      # extraction task
        "subtask_type": "实体识别",   # named entity recognition
        "text": "这也让很多业主据此认为,雅清苑是政府公务员挤对了国家的经适房政策。",
        "choices": [
            {"entity_type": "小区名字"},   # residential-community name
            {"entity_type": "岗位职责"},   # job responsibility
        ],
        "id": 0,
    }
]

model = UbertPiplines(args)
result = model.predict(test_data)
for line in result:
    print(line)
```
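For batch inference, the input dicts above can be assembled programmatically. The helper below is only an illustrative sketch (`make_extraction_input` is not part of the fengshen package); it reproduces the input schema used in the example:

```python
def make_extraction_input(texts, entity_types,
                          task_type="抽取任务", subtask_type="实体识别"):
    """Build a list of Ubert-style extraction inputs from raw texts.

    Follows the dict schema shown in the usage example above. This helper
    itself is hypothetical and not provided by fengshen.
    """
    return [
        {
            "task_type": task_type,
            "subtask_type": subtask_type,
            "text": text,
            # one choice dict per candidate entity type
            "choices": [{"entity_type": et} for et in entity_types],
            "id": i,
        }
        for i, text in enumerate(texts)
    ]
```

The resulting list can be passed directly to `model.predict(...)` as in the example above.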
## Scores on downstream Chinese tasks
| Model | ASAP-SENT | ASAP-ASPECT | ChnSentiCorp |
| --- | --- | --- | --- |
| Erlangshen-Roberta-110M-Sentiment | 97.77 | 97.31 | 96.61 |
| Erlangshen-Roberta-330M-Sentiment | 97.90 | 97.51 | 96.66 |
| Erlangshen-MegatronBert-1.3B-Sentiment | 98.10 | 97.80 | 97.00 |
## Citation
If you find this resource useful, please cite the following website in your paper:
```
@misc{Fengshenbang-LM,
  title={Fengshenbang-LM},
  author={IDEA-CCNL},
  year={2021},
  howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```