---
language: nl
license: mit
pipeline_tag: text-classification
inference: false
---

The predictions generated by the model might sometimes fall outside the scale (e.g. 4.2); this is normal for a regression model.

## Intended uses and limitations
- The model was fine-tuned (trained, validated, and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers, but the model cannot be used directly with the Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the inference API on this page is disabled.

## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```
import numpy as np
from simpletransformers.classification import ClassificationModel

# Load the fine-tuned model from the Hugging Face Hub
model = ClassificationModel(
    'roberta',
    'CLTL/icf-levels-adm',
    use_cuda=False,
)

# A Dutch clinical note fragment: "Progressive dyspnea complaints for 5-6 days
# now (already short of breath when walking short distances), while this was
# not the case before."
example = 'Nu sinds 5-6 dagen progressieve benauwdheidsklachten (bij korte stukken lopen al kortademig), terwijl dit eerder niet zo was.'
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```
The prediction on the example is:
```
2.26
```
The raw outputs look like this:
```
[[2.26074648]]
```
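
Since the prediction is continuous and can fall outside the scale, you may want a discrete level; one option is to round and clip the value. A minimal sketch; the default bounds below are illustrative placeholders, not values taken from this card:
```
import numpy as np

def to_level(raw_outputs, min_level=0, max_level=4):
    # Round the continuous prediction to the nearest level and clip it to the
    # scale. The bounds here are assumptions; substitute the endpoints of the
    # Level scale documented above.
    preds = np.squeeze(raw_outputs)
    return np.clip(np.rint(preds), min_level, max_level).astype(int)

to_level([[2.26074648]])  # -> 2
```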

## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).

## Training procedure
The default training parameters of Simple Transformers were used (a sketch of setting them explicitly follows the list), including:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
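
For reference, a minimal sketch of how these values map onto the library's `ClassificationArgs`; `regression=True` and `num_labels=1` are assumptions for a regression fine-tune and are not quoted from this card:
```
from simpletransformers.classification import ClassificationArgs, ClassificationModel

# Mirrors the defaults listed above; regression=True and num_labels=1 are
# assumptions for a regression set-up, not settings quoted from this card.
model_args = ClassificationArgs(
    optimizer='AdamW',
    learning_rate=4e-5,
    num_train_epochs=1,
    train_batch_size=8,
    regression=True,
)
model = ClassificationModel('roberta', 'CLTL/icf-levels-adm', num_labels=1, args=model_args, use_cuda=False)
```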

## Evaluation results
The evaluation is done on the sentence level (the classification unit) and on the note level (the aggregated unit, which is meaningful for healthcare professionals); a sketch of one possible aggregation follows the table below.

| Metric                  | Sentence-level | Note-level |
|-------------------------|----------------|------------|
| mean absolute error     | 0.48           | 0.37       |
| mean squared error      | 0.55           | 0.34       |
| root mean squared error | 0.74           | 0.58       |
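
The card does not specify how sentence-level predictions are aggregated to the note level. A minimal sketch, assuming a note's score is the mean of its sentence predictions (the averaging is an illustrative assumption):
```
import numpy as np

def note_level_prediction(model, sentences):
    # Average the sentence-level regression outputs of one note.
    # Mean aggregation is an assumption; the card does not document the
    # aggregation used in the evaluation.
    _, raw_outputs = model.predict(sentences)
    return float(np.mean(np.squeeze(raw_outputs)))
```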

## Authors and references
### Authors
Jenia Kim, Piek Vossen

### References
TBD