---
language:
- uz
tags:
- transformers
- mit
- robert
- uzrobert
- uzbek
- cyrillic
- latin
license: apache-2.0
widget:
- text: "Kuchli yomg‘irlar tufayli bir qator <mask> kuchli sel oqishi kuzatildi."
example_title: "Latin script"
- text: "Алишер Навоий – улуғ ўзбек ва бошқа туркий халқларнинг <mask>, мутафаккири ва давлат арбоби бўлган."
example_title: "Cyrillic script"
---
<p><b>UzRoBerta model.</b>
A pre-trained model for Uzbek (Cyrillic and Latin scripts) for masked language modeling and next sentence prediction.
<p><b>How to use.</b>
You can use this model directly with a pipeline for masked language modeling:
```python
from transformers import pipeline

# Load the fill-mask pipeline with this model
unmasker = pipeline('fill-mask', model='coppercitylabs/uzbert-base-uncased')

# Predict the masked token in a Cyrillic-script sentence
unmasker("Алишер Навоий – улуғ ўзбек ва бошқа туркий халқларнинг [MASK], мутафаккири ва давлат арбоби бўлган.")
```
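If you need more control than the pipeline offers, the model can also be loaded with the lower-level `AutoTokenizer` and `AutoModelForMaskedLM` classes. The sketch below is an assumption-based example, not part of the original card: it reuses the model identifier from the pipeline example above and ranks the top predictions for the masked token manually.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Assumes the same model identifier as in the pipeline example above
model_name = 'coppercitylabs/uzbert-base-uncased'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Use tokenizer.mask_token so the example works regardless of the
# model's mask-token convention ([MASK] vs. <mask>)
text = (
    "Алишер Навоий – улуғ ўзбек ва бошқа туркий халқларнинг "
    f"{tokenizer.mask_token}, мутафаккири ва давлат арбоби бўлган."
)
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the masked position and print its top-5 candidate tokens
mask_index = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_tokens = logits[0, mask_index].topk(5).indices[0]
print(tokenizer.decode(top_tokens))
```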
<p><b>Training data.</b>
The UzBERT model was pretrained on ≈2M news articles (≈3 GB).