numb3r3 committed
Commit d04d468 · 1 Parent(s): 7884ec1

chore: init readme

Files changed (1): README.md (+24 -2)
README.md CHANGED
@@ -20,7 +20,7 @@ tags:
 
 # jina-reranker-v1-turbo-en
 
-This model is designed for **blazing-fast** reranking while maintaining **competitive performance**. What's more, it leverages the power of our [JinaBERT](https://arxiv.org/abs/2310.19923) model as their foundation. JinaBERT itself is a unique variant of the BERT architecture that supports the symmetric bidirectional variant of [ALiBi](https://arxiv.org/abs/2108.12409). This allows `jina-reranker-v1-turbo-en` to process significantly longer sequences of text compared to other reranking models, up to an impressive **8,192** tokens.
+This model is designed for **blazing-fast** reranking while maintaining **competitive performance**. What's more, it leverages the power of our [JinaBERT](https://arxiv.org/abs/2310.19923) model as its foundation. `JinaBERT` itself is a unique variant of the BERT architecture that supports the symmetric bidirectional variant of [ALiBi](https://arxiv.org/abs/2108.12409). This allows `jina-reranker-v1-turbo-en` to process significantly longer sequences of text compared to other reranking models, up to an impressive **8,192** tokens.
 
 To achieve this remarkable speed, `jina-reranker-v1-turbo-en` employs a technique called knowledge distillation. Here, a complex but slower model (like our original [jina-reranker-v1-base-en](https://jina.ai/reranker/)) acts as a teacher, condensing its knowledge into a smaller, faster student model. This student retains most of the teacher's knowledge, allowing it to deliver similar accuracy in a fraction of the time.
 
@@ -32,9 +32,11 @@ Here's a breakdown of the reranker models we provide:
 | [jina-reranker-v1-turbo-en](https://huggingface.co/jinaai/jina-reranker-v1-turbo-en) | 6 | 384 | 37.8 |
 | [jina-reranker-v1-tiny-en](https://huggingface.co/jinaai/jina-reranker-v1-tiny-en) | 4 | 384 | 33.0 |
 
+ As you can see, the `jina-reranker-v1-turbo-en` offers a balanced approach with **6 layers** and **37.8 million** parameters. This translates to fast search and reranking while preserving a high degree of accuracy. The `jina-reranker-v1-tiny-en` prioritizes speed even further, achieving the fastest inference speeds with its **4-layer**, **33.0 million** parameter architecture. This makes it ideal for scenarios where absolute top accuracy is less crucial.
+
 # Usage
 
-You can use Jina Reranker models directly from transformers package:
+You can use `jina-reranker-v1-turbo-en` directly from the transformers package:
 
 ```python
 !pip install transformers
@@ -65,6 +67,25 @@ sentence_pairs = [[query, doc] for doc in documents]
 scores = model.compute_score(sentence_pairs)
 ```
 
+ # Evaluation
+
+ We evaluated Jina Reranker on 3 key benchmarks to ensure top-tier performance and search relevance.
+
+ | Model Name | NDCG@10 (17 BEIR datasets) | NDCG@10 (5 LoCo datasets) | Hit Rate (LlamaIndex RAG) |
+ | ---------------------------- | -------------------------- | ------------------------- | ------------------------- |
+ | `jina-reranker-v1-base-en` | 52.45 | 87.31 | 85.53 |
+ | `jina-reranker-v1-turbo-en` | 49.60 | 69.21 | 85.13 |
+ | `jina-reranker-v1-tiny-en` | 48.54 | 70.29 | 85.00 |
+ | `mxbai-rerank-base-v1` | 49.19 | - | 82.50 |
+ | `mxbai-rerank-xsmall-v1` | 48.80 | - | 83.69 |
+ | `ms-marco-MiniLM-L-6-v2` | 48.64 | - | 82.63 |
+ | `ms-marco-MiniLM-L-4-v2` | 47.81 | - | 83.82 |
+ | `bge-reranker-base` | 47.89 | - | 83.03 |
+
+ For more details, please refer to our [benchmarking page](https://jina.ai/reranker/).
+
+ > `NDCG@10` is a measure of ranking quality, with higher scores indicating better search results. `Hit Rate` measures the percentage of relevant documents that appear in the top 10 search results.
+
 # Contact
 
 Join our [Discord community](https://discord.jina.ai/) and chat with other community members about ideas.
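
The README text describes knowledge distillation only in prose. As a toy sketch of the idea — assuming nothing about Jina's actual training recipe, with a one-parameter linear "student" fitted by SGD to a fixed "teacher" scoring function, and every name below purely illustrative:

```python
def teacher_score(x):
    # Stand-in for a large, slow reranker: any fixed scoring function works.
    return 2.0 * x + 0.5

def distill(xs, lr=0.05, epochs=500):
    """Fit student parameters (w, b) to the teacher's scores via SGD on MSE."""
    w, b = 0.0, 0.0
    targets = [teacher_score(x) for x in xs]  # "soft labels" from the teacher
    for _ in range(epochs):
        for x, t in zip(xs, targets):
            err = (w * x + b) - t  # student error against the teacher
            w -= lr * err * x      # gradient step on the squared error
            b -= lr * err
    return w, b

w, b = distill([0.0, 0.25, 0.5, 0.75, 1.0])
print(round(w, 2), round(b, 2))  # prints 2.0 0.5 -- student matches the teacher
```

The same objective scales up in kind, if not in detail: the student reranker is trained to reproduce the teacher's relevance scores over query–document pairs, which is the teacher/student relationship the README describes.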
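
The `NDCG@10` footnote in the added Evaluation section can be made concrete. This is a minimal sketch of the textbook NDCG definition over binary relevance labels — not Jina's benchmark harness, and the function names are illustrative:

```python
import math

def dcg_at_k(relevances, k=10):
    """Discounted cumulative gain over the top-k results."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k=10):
    """DCG of the given ranking divided by the DCG of the ideal ranking."""
    ideal_dcg = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# A ranking that puts the relevant document first scores higher than one
# that buries it at position 3.
print(ndcg_at_k([1, 0, 0]))  # 1.0 (ideal order)
print(ndcg_at_k([0, 0, 1]))  # 0.5
```

Ordering documents by a reranker's scores and passing the resulting relevance list to `ndcg_at_k` yields the kind of number reported in the table above (scaled by 100).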