Token Classification · GLiNER · PyTorch · English · NER · information extraction · encoder · entity recognition · modernbert
Ihor committed · verified
Commit e7759b9 · 1 parent: 35a8fe2

Update README.md

Files changed (1): README.md +3 -1
README.md CHANGED
@@ -26,7 +26,9 @@ Such architecture brings several advantages over uni-encoder GLiNER:
 * Faster inference if entity embeddings are preprocessed;
 * Better generalization to unseen entities;
 
-Utilization of ModernBERT uncovers up to 3 times better efficiency in comparison to DeBERTa-based models and context length up to 8,192 tokens while demonstrating comparable results.
+Utilization of ModernBERT uncovers up to 4 times better efficiency in comparison to DeBERTa-based models and context length up to 8,192 tokens while demonstrating comparable results.
+
+![inference time comparison](modernbert_inference_time.png "Inference time comparison")
 
 However, bi-encoder architecture has some drawbacks such as a lack of inter-label interactions that make it hard for the model to disambiguate semantically similar but contextually different entities.
 
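The README's first bullet ("faster inference if entity embeddings are preprocessed") follows from the bi-encoder design: label/entity-type embeddings do not depend on the input text, so they can be encoded once and cached. A minimal numpy sketch of that idea is below; `embed` is a toy deterministic stand-in for an encoder, not GLiNER's actual model, and the names (`LABELS`, `score_span`) are illustrative only.

```python
import zlib
import numpy as np

DIM = 16

def embed(text: str) -> np.ndarray:
    # Toy stand-in for a text encoder: a deterministic,
    # unit-norm pseudo-embedding seeded from the text itself.
    rng = np.random.default_rng(zlib.crc32(text.encode()))
    v = rng.standard_normal(DIM)
    return v / np.linalg.norm(v)

# In a bi-encoder, label embeddings are independent of the input,
# so they are computed once here and reused for every document.
LABELS = ["person", "organization", "location"]
label_matrix = np.stack([embed(label) for label in LABELS])

def score_span(span_text: str) -> str:
    # Per-document work is only one encoder call plus a matrix
    # product against the cached label embeddings; the labels are
    # never re-encoded, which is where the inference speedup comes from.
    sims = label_matrix @ embed(span_text)
    return LABELS[int(np.argmax(sims))]
```

With a uni-encoder GLiNER, by contrast, labels and text are encoded jointly, so the label side must be recomputed for every input.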