---
language: multilingual
license: mit
widget:
- text: "and I cannot conceive the reafon why [MASK] hath"
- text: "Täkäläinen sanomalehdistö [MASK] erit - täin"
- text: "Det vore [MASK] häller nödvändigt att be"
- text: "Comme, à cette époque [MASK] était celle de la"
- text: "In [MASK] an atmosphärischen Nahrungsmitteln"
---
# hmBERT: Historical Multilingual Language Models for Named Entity Recognition
More information about our hmBERT model can be found in our new paper:
["hmBERT: Historical Multilingual Language Models for Named Entity Recognition"](https://arxiv.org/abs/2205.15575).
## Languages
Our Historic Language Models Zoo supports the following languages, including their training data sources:
| Language | Training data | Size
| -------- | ------------- | ----
| German | [Europeana](http://www.europeana-newspapers.eu/) | 13-28GB (filtered)
| French | [Europeana](http://www.europeana-newspapers.eu/) | 11-31GB (filtered)
| English | [British Library](https://data.bl.uk/digbks/db14.html) | 24GB (year filtered)
| Finnish | [Europeana](http://www.europeana-newspapers.eu/) | 1.2GB
| Swedish | [Europeana](http://www.europeana-newspapers.eu/) | 1.1GB
## Smaller Models
We have also released smaller variants of the multilingual model:
| Model identifier | Model Hub link
| ----------------------------------------------- | ---------------------------------------------------------------------------
| `dbmdz/bert-tiny-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-tiny-historic-multilingual-cased)
| `dbmdz/bert-mini-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-mini-historic-multilingual-cased)
| `dbmdz/bert-small-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-small-historic-multilingual-cased)
| `dbmdz/bert-medium-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-medium-historic-multilingual-cased)
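The released checkpoints can be tried out directly with the `transformers` fill-mask pipeline. The snippet below is a minimal sketch using the base checkpoint and one of the widget prompts from the metadata above:

```python
from transformers import pipeline

# Minimal sketch: load the historic multilingual base checkpoint and run one of
# the fill-mask examples from the widget section above.
fill_mask = pipeline(
    "fill-mask",
    model="dbmdz/bert-base-historic-multilingual-cased",
)

predictions = fill_mask("and I cannot conceive the reafon why [MASK] hath")
for prediction in predictions:
    print(prediction["token_str"], prediction["score"])
```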
# Corpora Stats
## German Europeana Corpus
We provide some statistics using different OCR confidence thresholds, in order to shrink the corpus size
and use less noisy data:
| OCR confidence | Size
| -------------- | ----
| **0.60** | 28GB
| 0.65 | 18GB
| 0.70 | 13GB
For the final corpus we use an OCR confidence of 0.6 (28GB). The following plot shows the tokens-per-year distribution:
![German Europeana Corpus Stats](stats/figures/german_europeana_corpus_stats.png)
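The exact format of the Europeana dumps is not shown here, so the following is only a rough sketch of such a confidence filter. It assumes a hypothetical `metadata.tsv` that maps each document id to a mean OCR confidence:

```python
import csv

# Hypothetical layout: metadata.tsv maps a document id to its mean OCR
# confidence, and texts/<doc_id>.txt holds the corresponding plain text.
MIN_OCR_CONFIDENCE = 0.60  # threshold used for the final German corpus

def filtered_doc_ids(metadata_path: str, threshold: float):
    with open(metadata_path, encoding="utf-8") as handle:
        reader = csv.DictReader(handle, delimiter="\t")
        for row in reader:
            if float(row["ocr_confidence"]) >= threshold:
                yield row["doc_id"]

if __name__ == "__main__":
    kept = list(filtered_doc_ids("metadata.tsv", MIN_OCR_CONFIDENCE))
    print(f"Keeping {len(kept)} documents at confidence >= {MIN_OCR_CONFIDENCE}")
```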
## French Europeana Corpus
As for German, we use different OCR confidence thresholds:
| OCR confidence | Size
| -------------- | ----
| 0.60 | 31GB
| 0.65 | 27GB
| **0.70** | 27GB
| 0.75 | 23GB
| 0.80 | 11GB
For the final corpus we use an OCR confidence of 0.7 (27GB). The following plot shows the tokens-per-year distribution:
![French Europeana Corpus Stats](stats/figures/french_europeana_corpus_stats.png)
## British Library Corpus
Metadata is taken from [here](https://data.bl.uk/digbks/DB21.html). Stats including year filtering:
| Years | Size
| ----------------- | ----
| ALL | 24GB
| >= 1800 && < 1900 | 24GB
We use the year-filtered variant. The following plot shows the tokens-per-year distribution:
![British Library Corpus Stats](stats/figures/bl_corpus_stats.png)
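The year filter itself reduces to a simple predicate over the publication year in the metadata. The sketch below assumes a hypothetical `bl_metadata.tsv` with `record_id` and `year` columns:

```python
import csv

def in_19th_century(year_value: str) -> bool:
    """Keep only records published in [1800, 1900)."""
    try:
        year = int(year_value)
    except ValueError:  # drop records with missing or non-numeric years
        return False
    return 1800 <= year < 1900

# Hypothetical metadata layout with `record_id` and `year` columns:
with open("bl_metadata.tsv", encoding="utf-8") as handle:
    reader = csv.DictReader(handle, delimiter="\t")
    selected = [row["record_id"] for row in reader if in_19th_century(row["year"])]

print(f"{len(selected)} records kept after year filtering")
```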
## Finnish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.2GB
The following plot shows the tokens-per-year distribution:
![Finnish Europeana Corpus Stats](stats/figures/finnish_europeana_corpus_stats.png)
## Swedish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.1GB
The following plot shows the tokens-per-year distribution:
![Swedish Europeana Corpus Stats](stats/figures/swedish_europeana_corpus_stats.png)
## All Corpora
The following plot shows the tokens-per-year distribution of the complete training corpus:
![All Corpora Stats](stats/figures/all_corpus_stats.png)
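The corpus plots referenced above are plain tokens-per-year histograms. A minimal sketch of how such a distribution can be aggregated is shown below, assuming documents are available as (year, text) pairs:

```python
from collections import Counter

def tokens_per_year(documents):
    """Aggregate a whitespace-token count per publication year.

    `documents` is assumed to be an iterable of (year, text) pairs.
    """
    counts = Counter()
    for year, text in documents:
        counts[year] += len(text.split())
    return counts

# Tiny usage example with dummy data:
distribution = tokens_per_year(
    [(1867, "ein kleines Beispiel"), (1867, "noch eins"), (1901, "one more")]
)
print(sorted(distribution.items()))  # [(1867, 5), (1901, 2)]
```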
# Multilingual Vocab generation
For the first attempt, we use the first 10GB of each pretraining corpus. We upsample both Finnish and Swedish to ~10GB.
The following table shows the exact sizes used for generating the 32k and 64k subword vocabs (a tokenizer-training sketch follows the table):
| Language | Size
| -------- | ----
| German | 10GB
| French | 10GB
| English | 10GB
| Finnish | 9.5GB
| Swedish | 9.7GB
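The exact vocab-generation setup is not shown here, so the following is only a minimal sketch of training a cased 32k WordPiece vocabulary with the Hugging Face `tokenizers` library; the file names are placeholders for the 10GB-per-language samples listed above:

```python
from tokenizers import BertWordPieceTokenizer

# Minimal sketch: train a cased 32k WordPiece vocab on the sampled corpora.
# The file names below are placeholders for the 10GB-per-language samples.
tokenizer = BertWordPieceTokenizer(lowercase=False, strip_accents=False)
tokenizer.train(
    files=[
        "german_10GB.txt",
        "french_10GB.txt",
        "english_10GB.txt",
        "finnish_upsampled.txt",
        "swedish_upsampled.txt",
    ],
    vocab_size=32_000,
    min_frequency=2,
    special_tokens=["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"],
)
tokenizer.save_model(".", "historic-multilingual-32k")
```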
We then calculate the subword fertility rate and portion of `[UNK]`s over the following NER corpora:
| Language | NER corpora
| -------- | ------------------
| German | CLEF-HIPE, NewsEye
| French | CLEF-HIPE, NewsEye
| English | CLEF-HIPE
| Finnish | NewsEye
| Swedish | NewsEye
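Both metrics can be computed directly from a tokenizer and the word tokens of an NER corpus: subword fertility is the average number of subwords per word token, and the unknown portion is the share of subwords that map to `[UNK]`. The snippet below is a minimal sketch of that computation (the token list is a dummy placeholder):

```python
from transformers import AutoTokenizer

def fertility_and_unk_portion(tokenizer, tokens):
    """Compute subword fertility and [UNK] portion over a list of word tokens."""
    n_subwords = 0
    n_unknown = 0
    for token in tokens:
        subwords = tokenizer.tokenize(token)
        n_subwords += len(subwords)
        n_unknown += sum(1 for sw in subwords if sw == tokenizer.unk_token)
    return n_subwords / len(tokens), n_unknown / n_subwords

# Usage sketch with the released base checkpoint and a few dummy tokens:
tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-historic-multilingual-cased")
tokens = ["Täkäläinen", "sanomalehdistö", "hath", "nödvändigt"]
fertility, unk_portion = fertility_and_unk_portion(tokenizer, tokens)
print(f"fertility={fertility:.2f}, unk portion={unk_portion:.4f}")
```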
Breakdown of subword fertility rate and unknown portion per language for the 32k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.43 | 0.0004
| French | 1.25 | 0.0001
| English | 1.25 | 0.0
| Finnish | 1.69 | 0.0007
| Swedish | 1.43 | 0.0
Breakdown of subword fertility rate and unknown portion per language for the 64k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.31 | 0.0004
| French | 1.16 | 0.0001
| English | 1.17 | 0.0
| Finnish | 1.54 | 0.0007
| Swedish | 1.32 | 0.0
# Final pretraining corpora
We upsample Swedish and Finnish to ~27GB. The final stats for all pretraining corpora can be seen here:
| Language | Size
| -------- | ----
| German | 28GB
| French | 27GB
| English | 24GB
| Finnish | 27GB
| Swedish | 27GB
The total size is around 130GB.
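The upsampling procedure is not spelled out here; one simple way to reach a target size is to repeat the smaller corpora until the desired byte count is hit, as in the sketch below (paths and sizes are placeholders):

```python
import os

def upsample(source_path: str, target_path: str, target_bytes: int) -> None:
    """Repeat a corpus file until the copy reaches roughly the target size."""
    written = 0
    with open(target_path, "wb") as target:
        while written < target_bytes:
            with open(source_path, "rb") as source:
                # Copy in 1MiB chunks; the last repetition may stop mid-document.
                for chunk in iter(lambda: source.read(1 << 20), b""):
                    target.write(chunk)
                    written += len(chunk)
                    if written >= target_bytes:
                        break

# e.g. upsample the ~1.1GB Swedish corpus to roughly 27GB:
upsample("swedish_europeana.txt", "swedish_upsampled.txt", 27 * 1024**3)
print(os.path.getsize("swedish_upsampled.txt"))
```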
# Pretraining
## Multilingual model
We train a multilingual BERT model using the 32k vocab with the official BERT implementation
on a v3-32 TPU using the following parameters:
```bash
python3 run_pretraining.py --input_file gs://histolectra/historic-multilingual-tfrecords/*.tfrecord \
  --output_dir gs://histolectra/bert-base-historic-multilingual-cased \
  --bert_config_file ./config.json \
  --max_seq_length=512 \
  --max_predictions_per_seq=75 \
  --do_train=True \
  --train_batch_size=128 \
  --num_train_steps=3000000 \
  --learning_rate=1e-4 \
  --save_checkpoints_steps=100000 \
  --keep_checkpoint_max=20 \
  --use_tpu=True \
  --tpu_name=electra-2 \
  --num_tpu_cores=32
```
The following plot shows the pretraining loss curve:
![Training loss curve](stats/figures/pretraining_loss_historic-multilingual.png)
# Acknowledgments
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as
TensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗