Hebrew Language Model

State-of-the-art RoBERTa language model for Hebrew.

How to use

from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('HeNLP/HeRo')
model = AutoModelForMaskedLM.from_pretrained('HeNLP/HeRo')

# Tokenizing
tokenized_string = tokenizer('שלום לכולם')

# Decoding
decoded_string = tokenizer.decode(tokenized_string['input_ids'], skip_special_tokens=True)
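
Since the checkpoint loads with a masked language modeling head, it can also be used to fill in masked tokens directly. Below is a minimal sketch using the generic transformers fill-mask pipeline; the Hebrew example sentence is illustrative and not taken from the model card.

from transformers import pipeline

# A fill-mask pipeline backed by HeRo (illustrative usage).
fill_mask = pipeline('fill-mask', model='HeNLP/HeRo')

# RoBERTa-style tokenizers expose the mask token as tokenizer.mask_token.
masked_sentence = f'שלום {fill_mask.tokenizer.mask_token}'

# Each prediction is a dict with the predicted token and its score.
for prediction in fill_mask(masked_sentence):
    print(prediction['token_str'], prediction['score'])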

Citing

If you use HeRo in your research, please cite HeRo: RoBERTa and Longformer Hebrew Language Models.

@article{shalumov2023hero,
  title={HeRo: RoBERTa and Longformer Hebrew Language Models},
  author={Vitaly Shalumov and Harel Haskey},
  year={2023},
  journal={arXiv:2304.11077},
}