# Andrija/SRoBERTa-L
Task: Fill-Mask · Libraries: Transformers, PyTorch · Datasets: oscar, srwac, leipzig · Languages: Croatian, Serbian, multilingual · Tags: roberta, masked-lm, Inference Endpoints

License: apache-2.0
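As a minimal sketch of how this fill-mask model might be used (the repo id `Andrija/SRoBERTa-L` and the task tag come from this page; the rest is standard `transformers` pipeline usage, and the Serbian sentence is an illustrative example, not from the model card):

```python
from transformers import pipeline

# Fill-mask pipeline for the Serbian RoBERTa model hosted as Andrija/SRoBERTa-L.
fill_mask = pipeline("fill-mask", model="Andrija/SRoBERTa-L")

# Use the tokenizer's own mask token rather than hard-coding "<mask>".
mask = fill_mask.tokenizer.mask_token

# Illustrative Serbian sentence, masking "grad":
# "Beograd je glavni grad Srbije." ("Belgrade is the capital city of Serbia.")
for prediction in fill_mask(f"Beograd je glavni {mask} Srbije."):
    print(f"{prediction['token_str']!r}: {prediction['score']:.3f}")
```

The pipeline returns a list of candidate fills, each with the predicted token string and its score.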
## Files and versions

3 contributors · History: 11 commits

Latest commit: `7bd19d0` by SFconvertbot, "Adding `safetensors` variant of this model" (over 1 year ago)
Model description (from the initial model commit): a RoBERTa model called SRoBERTa, trained on the WOL dataset (OSCAR + Leipzig + srWac) for the Serbian language. Attention heads (distilled): 6, batch size: 64, group size: 64, epochs: 2, test split: 0.05 (~1M groups).

| File | Size | Last commit | Updated |
|---|---|---|---|
| .gitattributes | 791 Bytes | Adding `safetensors` variant of this model | over 1 year ago |
| README.md | 674 Bytes | Add "multilingual" to the language tag (#1) | about 2 years ago |
| config.json | 612 Bytes | Initial model commit (see description above) | over 3 years ago |
| merges.txt | 694 kB | Initial commit of tokenizer | over 3 years ago |
| model.safetensors | 322 MB (LFS) | Adding `safetensors` variant of this model | over 1 year ago |
| pytorch_model.bin | 322 MB (LFS, pickle) | Initial model commit (see description above) | over 3 years ago |
| special_tokens_map.json | 772 Bytes | Initial commit of tokenizer | over 3 years ago |
| tokenizer.json | 1.81 MB | Initial commit of tokenizer | over 3 years ago |
| tokenizer_config.json | 1.09 kB | Initial commit of tokenizer | over 3 years ago |
| training_args.bin | 2.67 kB (LFS, pickle) | Initial model commit (see description above) | over 3 years ago |
| vocab.json | 1.02 MB | Initial commit of tokenizer | over 3 years ago |

Two files are pickled; the detected pickle imports are:

- `pytorch_model.bin`: `collections.OrderedDict`, `torch.FloatStorage`, `torch.LongStorage`, `torch._utils._rebuild_tensor_v2`
- `training_args.bin`: `transformers.trainer_utils.SchedulerType`, `transformers.training_args.TrainingArguments`, `transformers.trainer_utils.IntervalStrategy`, `torch.device`
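Since the repo ships both a pickled `pytorch_model.bin` and a `model.safetensors` variant, a cautious loader can ask `transformers` to use the safetensors weights, which avoids executing any pickle code at load time. A minimal sketch, assuming a recent `transformers` release where `from_pretrained` accepts `use_safetensors` (verify against your installed version):

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Andrija/SRoBERTa-L")

# Load the safetensors weights explicitly; this skips the pickled
# pytorch_model.bin entirely, so no pickle deserialization runs.
model = AutoModelForMaskedLM.from_pretrained(
    "Andrija/SRoBERTa-L",
    use_safetensors=True,  # fail rather than fall back to the pickle checkpoint
)
```

The remaining pickled file, `training_args.bin`, is a serialized `TrainingArguments` object; unpickling it (e.g. via `torch.load`) runs arbitrary code from the listed imports, so only do that for repositories you trust.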