---
language:
- zh
license: apache-2.0
tags:
- bert
- NLU
- Sentiment
- Chinese
inference: false
widget:
- text: 今天心情不好
---
# Erlangshen-Ubert-110M (Chinese), one model of Fengshenbang-LM
We collected 70+ datasets in the Chinese domain for fine-tuning, with a total of 1,065,069 samples. Our model is mainly based on MacBERT.

Ubert is the solution we proposed for the 2022 AIWIN World Artificial Intelligence Innovation Competition, where it took first place on both the A and B leaderboards, improving on the officially provided baseline by 20 percentage points. Ubert can handle not only common extraction tasks such as entity recognition and event extraction, but also classification tasks such as news classification and natural language inference.

More details are available in our GitHub repository: https://github.com/IDEA-CCNL/Fengshenbang-LM
## Usage
Install the `fengshen` package from source:

```shell
git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git
cd Fengshenbang-LM
pip install --editable ./
```
Then run the following code:
```python
import argparse
from fengshen import UbertPiplines

# Build the argument parser, add the Ubert pipeline arguments,
# and point the pipeline at the pretrained checkpoint.
total_parser = argparse.ArgumentParser("TASK NAME")
total_parser = UbertPiplines.piplines_args(total_parser)
args = total_parser.parse_args()
args.pretrained_model_path = "IDEA-CCNL/Erlangshen-Ubert-110M"

# Each sample specifies the task type, sub-task type, input text,
# and the candidate choices (here: entity types for entity recognition).
test_data = [
    {
        "task_type": "抽取任务",      # extraction task
        "subtask_type": "实体识别",   # entity recognition
        "text": "这也让很多业主据此认为,雅清苑是政府公务员挤对了国家的经适房政策。",
        "choices": [
            {"entity_type": "小区名字"},  # residential-community name
            {"entity_type": "岗位职责"},  # job responsibility
        ],
        "id": 0,
    }
]

model = UbertPiplines(args)
result = model.predict(test_data)
for line in result:
    print(line)
```
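As noted above, Ubert also covers classification-style tasks. The following is a minimal sketch of what such an input might look like, continuing from the snippet above (it reuses `model`). The specific `task_type`/`subtask_type` strings and the candidate labels are illustrative assumptions patterned on the extraction example, not values confirmed by this card.

```python
# A hedged sketch of a classification-style (sentiment) input, assuming the
# same list-of-dicts format and the same predict() interface as above.
# The task/subtask strings and label set below are assumed for illustration.
classification_data = [
    {
        "task_type": "分类任务",       # "classification task" (assumed value)
        "subtask_type": "情感分析",    # "sentiment analysis" (assumed value)
        "text": "今天心情不好",        # widget example text from this card
        "choices": [
            {"entity_type": "正面"},   # positive (assumed label)
            {"entity_type": "负面"},   # negative (assumed label)
        ],
        "id": 1,
    }
]

result = model.predict(classification_data)
for line in result:
    print(line)
```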
## Citation

If you find this resource useful, please cite the following repository in your paper.
```
@misc{Fengshenbang-LM,
  title={Fengshenbang-LM},
  author={IDEA-CCNL},
  year={2021},
  howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```