Use the 'Bert'-related tokenizer classes together with the 'Nezha'-related model classes: NEZHA does not define its own tokenizer and reuses BERT's.

NEZHA: Neural Contextualized Representation for Chinese Language Understanding, by Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen, and Qun Liu.

The original checkpoints can be found here.

Example Usage

```python
from transformers import BertTokenizer, NezhaModel

# NEZHA reuses the BERT tokenizer, so load it with BertTokenizer.
tokenizer = BertTokenizer.from_pretrained("sijunhe/nezha-base-wwm")
model = NezhaModel.from_pretrained("sijunhe/nezha-base-wwm")

text = "我爱北京天安门"  # "I love Beijing's Tiananmen"
encoded_input = tokenizer(text, return_tensors="pt")
output = model(**encoded_input)  # output.last_hidden_state: (batch, seq_len, hidden)
```
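The model returns one contextual vector per token in `output.last_hidden_state`. If you want a single sentence-level vector (e.g. for similarity or retrieval), a common approach is attention-mask-aware mean pooling. The helper below is a generic sketch, not part of the NEZHA release; it only assumes the `(batch, seq_len, hidden)` hidden-state and `(batch, seq_len)` attention-mask shapes shown above, and is demonstrated on dummy tensors so it runs without downloading the checkpoint.

```python
import torch

def mean_pool(last_hidden_state, attention_mask):
    # Zero out padding positions, then average over the real tokens only.
    mask = attention_mask.unsqueeze(-1).float()     # (batch, seq_len, 1)
    summed = (last_hidden_state * mask).sum(dim=1)  # (batch, hidden)
    counts = mask.sum(dim=1).clamp(min=1e-9)        # (batch, 1), avoid div-by-zero
    return summed / counts                          # (batch, hidden)

# Demo with dummy shapes (batch=1, seq_len=9, hidden=768, NEZHA-base's hidden size).
hidden = torch.randn(1, 9, 768)
mask = torch.ones(1, 9, dtype=torch.long)
sentence_vec = mean_pool(hidden, mask)
print(sentence_vec.shape)  # torch.Size([1, 768])
```

With a real batch you would pass `output.last_hidden_state` and `encoded_input["attention_mask"]` instead of the dummy tensors.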