The model is specifically designed to handle clinical texts where code-switching between Korean and English is frequent, making it particularly effective for processing medical terms and abbreviations in a bilingual context.
## Usage

You can load the model from HuggingFace Hub while using the local tokenizer:

```python
from transformers import BertForPreTraining
from tokenization_kobert import KoBERTTokenizer

# Load model from HuggingFace
model = BertForPreTraining.from_pretrained("kimsiun/kaers-bert-241101")

# Load tokenizer from local file
tokenizer = KoBERTTokenizer.from_pretrained('skt/kobert-base-v1')
```

## Key Features
- Specialized in clinical and pharmaceutical domain text