Andresckamilo committed on
Commit 1b05214 · verified · 1 Parent(s): ad3e134

Add new SentenceTransformer model.
1_Pooling/config.json ADDED
```json
{
  "word_embedding_dimension": 768,
  "pooling_mode_cls_token": true,
  "pooling_mode_mean_tokens": false,
  "pooling_mode_max_tokens": false,
  "pooling_mode_mean_sqrt_len_tokens": false,
  "pooling_mode_weightedmean_tokens": false,
  "pooling_mode_lasttoken": false,
  "include_prompt": true
}
```
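A minimal sketch of what this pooling configuration does: with `pooling_mode_cls_token` enabled, the pooled sentence embedding is just the first ([CLS]) token's hidden state, which the model's final `Normalize` module then L2-normalizes. This is illustrative only, using random arrays in place of real transformer outputs.

```python
import numpy as np

def cls_pool_and_normalize(token_embeddings: np.ndarray) -> np.ndarray:
    """CLS pooling: take each sequence's first token vector, then L2-normalize
    (mirrors pooling_mode_cls_token=true followed by the Normalize module)."""
    cls = token_embeddings[:, 0, :]  # (batch, hidden)
    norms = np.linalg.norm(cls, axis=1, keepdims=True)
    return cls / norms

# Toy stand-in for BERT output: (batch=2, seq_len=5, hidden=768)
rng = np.random.default_rng(0)
tokens = rng.normal(size=(2, 5, 768))
emb = cls_pool_and_normalize(tokens)
print(emb.shape)                                      # (2, 768)
print(np.allclose(np.linalg.norm(emb, axis=1), 1.0))  # True
```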
README.md ADDED
---
language:
- en
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dataset_size:1K<n<10K
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: BAAI/bge-base-en-v1.5
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
widget:
- source_sentence: What types of industries does TTI service?
  sentences:
  - What types of businesses does HPE serve?
  - How much did the company's revenues decrease in 2023 compared to 2022?
  - By what percentage did the quarterly cash dividend increase on January 26, 2023?
- source_sentence: What does ITEM 8 in Form 10-K refer to?
  sentences:
  - ITEM 8 in Form 10-K refers to the Financial Statements and Supplementary Data.
  - UnitedHealth Group reported net earnings of $23,144 million in 2023.
  - What factors contributed to the decrease in automotive leasing revenue in 2023?
- source_sentence: What are consolidated financial statements?
  sentences:
  - The report on the Consolidated Financial Statements is dated February 16, 2024.
  - How much did the foreclosed properties decrease in value during 2023?
  - What was Chipotle Mexican Grill's net income in 2023?
- source_sentence: What were the total product sales in 2023?
  sentences:
  - Total product sales in 2023 amounted to $27,305 million.
  - How does AutoZone manage its foreign operations in terms of currency?
  - What restrictions does the Bank Holding Company Act impose on JPMorgan Chase?
- source_sentence: What is the global presence of Lubrizol?
  sentences:
  - How does The Coca-Cola Company distribute its beverage products globally?
  - What are the two operating segments of NVIDIA as mentioned in the text?
  - How much did Delta Air Lines spend on debt and finance lease obligations in 2023?
pipeline_tag: sentence-similarity
model-index:
- name: BGE base Financial Matryoshka
  results:
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 768
      type: dim_768
    metrics:
    - type: cosine_accuracy@1
      value: 0.6957142857142857
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.8342857142857143
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.8628571428571429
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.9085714285714286
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.6957142857142857
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.2780952380952381
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.17257142857142854
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.09085714285714284
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.6957142857142857
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.8342857142857143
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.8628571428571429
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.9085714285714286
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.8045138729797765
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.7709591836734694
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.7746687336147619
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 512
      type: dim_512
    metrics:
    - type: cosine_accuracy@1
      value: 0.7
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.8271428571428572
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.8642857142857143
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.9157142857142857
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.7
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.2757142857142857
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.17285714285714285
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.09157142857142857
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.7
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.8271428571428572
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.8642857142857143
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.9157142857142857
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.807258910509631
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.7726218820861678
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.7757170101327764
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 256
      type: dim_256
    metrics:
    - type: cosine_accuracy@1
      value: 0.6928571428571428
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.82
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.8585714285714285
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.9028571428571428
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.6928571428571428
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.2733333333333334
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.1717142857142857
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.09028571428571427
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.6928571428571428
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.82
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.8585714285714285
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.9028571428571428
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.7979490809476271
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.7643027210884353
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.7684617620062486
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 128
      type: dim_128
    metrics:
    - type: cosine_accuracy@1
      value: 0.6857142857142857
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.81
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.8542857142857143
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.89
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.6857142857142857
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.27
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.17085714285714282
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.089
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.6857142857142857
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.81
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.8542857142857143
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.89
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.7877753635329912
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.7549472789115641
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.7596045003108374
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 64
      type: dim_64
    metrics:
    - type: cosine_accuracy@1
      value: 0.6528571428571428
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.7571428571428571
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.8185714285714286
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.8685714285714285
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.6528571428571428
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.2523809523809524
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.1637142857142857
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.08685714285714284
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.6528571428571428
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.7571428571428571
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.8185714285714286
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.8685714285714285
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.7557078446701566
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.7201400226757368
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.7249497855774768
      name: Cosine Map@100
---

# BGE base Financial Matryoshka

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Andresckamilo/bge-base-financial-matryoshka")
# Run inference
sentences = [
    'What is the global presence of Lubrizol?',
    'How does The Coca-Cola Company distribute its beverage products globally?',
    'What are the two operating segments of NVIDIA as mentioned in the text?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
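Because the model was trained with MatryoshkaLoss over dimensions [768, 512, 256, 128, 64], embeddings can be shortened to one of those prefixes with only a modest quality drop (recent sentence-transformers versions also accept a `truncate_dim` argument when constructing `SentenceTransformer`). The sketch below shows the truncate-and-renormalize step itself, on random stand-ins for `model.encode(...)` output so it runs without downloading the model.

```python
import numpy as np

def truncate_and_renormalize(embeddings: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` components of each pooled embedding, then
    re-normalize so cosine similarity stays a plain dot product."""
    truncated = embeddings[:, :dim]
    return truncated / np.linalg.norm(truncated, axis=1, keepdims=True)

# Toy stand-in for model.encode(...) output: 3 unit-norm 768-d vectors
rng = np.random.default_rng(42)
full = rng.normal(size=(3, 768))
full /= np.linalg.norm(full, axis=1, keepdims=True)

small = truncate_and_renormalize(full, 256)
print(small.shape)              # (3, 256)
similarities = small @ small.T  # cosine sims at the truncated dimension
print(similarities.shape)       # (3, 3)
```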

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.6957     |
| cosine_accuracy@3   | 0.8343     |
| cosine_accuracy@5   | 0.8629     |
| cosine_accuracy@10  | 0.9086     |
| cosine_precision@1  | 0.6957     |
| cosine_precision@3  | 0.2781     |
| cosine_precision@5  | 0.1726     |
| cosine_precision@10 | 0.0909     |
| cosine_recall@1     | 0.6957     |
| cosine_recall@3     | 0.8343     |
| cosine_recall@5     | 0.8629     |
| cosine_recall@10    | 0.9086     |
| cosine_ndcg@10      | 0.8045     |
| cosine_mrr@10       | 0.771      |
| **cosine_map@100**  | **0.7747** |

#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.7        |
| cosine_accuracy@3   | 0.8271     |
| cosine_accuracy@5   | 0.8643     |
| cosine_accuracy@10  | 0.9157     |
| cosine_precision@1  | 0.7        |
| cosine_precision@3  | 0.2757     |
| cosine_precision@5  | 0.1729     |
| cosine_precision@10 | 0.0916     |
| cosine_recall@1     | 0.7        |
| cosine_recall@3     | 0.8271     |
| cosine_recall@5     | 0.8643     |
| cosine_recall@10    | 0.9157     |
| cosine_ndcg@10      | 0.8073     |
| cosine_mrr@10       | 0.7726     |
| **cosine_map@100**  | **0.7757** |

#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.6929     |
| cosine_accuracy@3   | 0.82       |
| cosine_accuracy@5   | 0.8586     |
| cosine_accuracy@10  | 0.9029     |
| cosine_precision@1  | 0.6929     |
| cosine_precision@3  | 0.2733     |
| cosine_precision@5  | 0.1717     |
| cosine_precision@10 | 0.0903     |
| cosine_recall@1     | 0.6929     |
| cosine_recall@3     | 0.82       |
| cosine_recall@5     | 0.8586     |
| cosine_recall@10    | 0.9029     |
| cosine_ndcg@10      | 0.7979     |
| cosine_mrr@10       | 0.7643     |
| **cosine_map@100**  | **0.7685** |

#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.6857     |
| cosine_accuracy@3   | 0.81       |
| cosine_accuracy@5   | 0.8543     |
| cosine_accuracy@10  | 0.89       |
| cosine_precision@1  | 0.6857     |
| cosine_precision@3  | 0.27       |
| cosine_precision@5  | 0.1709     |
| cosine_precision@10 | 0.089      |
| cosine_recall@1     | 0.6857     |
| cosine_recall@3     | 0.81       |
| cosine_recall@5     | 0.8543     |
| cosine_recall@10    | 0.89       |
| cosine_ndcg@10      | 0.7878     |
| cosine_mrr@10       | 0.7549     |
| **cosine_map@100**  | **0.7596** |

#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.6529     |
| cosine_accuracy@3   | 0.7571     |
| cosine_accuracy@5   | 0.8186     |
| cosine_accuracy@10  | 0.8686     |
| cosine_precision@1  | 0.6529     |
| cosine_precision@3  | 0.2524     |
| cosine_precision@5  | 0.1637     |
| cosine_precision@10 | 0.0869     |
| cosine_recall@1     | 0.6529     |
| cosine_recall@3     | 0.7571     |
| cosine_recall@5     | 0.8186     |
| cosine_recall@10    | 0.8686     |
| cosine_ndcg@10      | 0.7557     |
| cosine_mrr@10       | 0.7201     |
| **cosine_map@100**  | **0.7249** |

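In this single-relevant-document setting (one gold passage per question), accuracy@k equals recall@k, and precision@k is accuracy@k divided by k, which is exactly the pattern in the tables above. A toy numpy sketch of these reductions, illustrative rather than the evaluator's actual implementation:

```python
import numpy as np

def single_relevant_metrics(ranks: np.ndarray, k: int = 10):
    """Toy IR metrics when each query has exactly one relevant document.
    `ranks` holds the 1-based rank of that document for each query."""
    hit = ranks <= k
    accuracy_at_k = hit.mean()                # equals recall@k here
    precision_at_k = hit.mean() / k           # one relevant doc among k results
    mrr_at_k = np.where(hit, 1.0 / ranks, 0.0).mean()
    ndcg_at_k = np.where(hit, 1.0 / np.log2(ranks + 1), 0.0).mean()
    return accuracy_at_k, precision_at_k, mrr_at_k, ndcg_at_k

# Hypothetical ranks of the relevant doc for 5 queries
ranks = np.array([1, 2, 1, 5, 12])
acc, prec, mrr, ndcg = single_relevant_metrics(ranks, k=10)
print(round(acc, 2), round(mrr, 3))  # 0.8 0.54
```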
<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### Unnamed Dataset

* Size: 6,300 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
  |         | positive                                                                           | anchor                                                                            |
  |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
  | type    | string                                                                              | string                                                                            |
  | details | <ul><li>min: 6 tokens</li><li>mean: 45.39 tokens</li><li>max: 371 tokens</li></ul>  | <ul><li>min: 7 tokens</li><li>mean: 20.23 tokens</li><li>max: 45 tokens</li></ul> |
* Samples:
  | positive | anchor |
  |:---------|:-------|
  | <code>Chubb mitigates exposure to climate change risk by ceding catastrophe risk in our insurance portfolio through both reinsurance and capital markets, and our investment portfolio through the diversification of risk, industry, location, type and duration of security.</code> | <code>How does Chubb respond to the risks associated with climate change?</code> |
  | <code>Item 8 of Part IV in the Annual Report on Form 10-K details the consolidated financial statements and accompanying notes.</code> | <code>What documents are detailed in Item 8 of Part IV of the Annual Report on Form 10-K?</code> |
  | <code>While the outcome of this matter cannot be determined at this time, it is not currently expected to have a material adverse impact on our business.</code> | <code>Is the outcome of the investigation into Tesla's waste segregation practices currently determinable?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
  ```json
  {
      "loss": "MultipleNegativesRankingLoss",
      "matryoshka_dims": [768, 512, 256, 128, 64],
      "matryoshka_weights": [1, 1, 1, 1, 1],
      "n_dims_per_step": -1
  }
  ```
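Conceptually, these parameters mean the MultipleNegativesRankingLoss (in-batch-negatives softmax cross-entropy over cosine similarities) is computed at each truncated prefix length and summed with the given weights. A simplified numpy sketch of that idea, not the library's implementation (the `scale=20` similarity scaling is MNRL's usual default, assumed here):

```python
import numpy as np

def mnrl(anchors: np.ndarray, positives: np.ndarray, scale: float = 20.0) -> float:
    """In-batch softmax cross-entropy: anchor i's positive is positives[i];
    every other row in the batch acts as a negative."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    scores = scale * (a @ p.T)  # (batch, batch) scaled cosine similarities
    log_softmax = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_softmax)))

def matryoshka_loss(anchors, positives,
                    dims=(768, 512, 256, 128, 64), weights=(1, 1, 1, 1, 1)):
    """Sum the base loss over truncated prefixes of the embeddings."""
    return sum(w * mnrl(anchors[:, :d], positives[:, :d])
               for d, w in zip(dims, weights))

rng = np.random.default_rng(0)
anchors = rng.normal(size=(4, 768))
positives = rng.normal(size=(4, 768))
print(matryoshka_loss(anchors, positives) > 0)  # True
```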

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional

</details>

### Training Logs
| Epoch      | Step   | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 |
|:----------:|:------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|
| 0.8122     | 10     | 1.521         | -                      | -                      | -                      | -                     | -                      |
| 0.9746     | 12     | -             | 0.7434                 | 0.7579                 | 0.7641                 | 0.6994                | 0.7678                 |
| 1.6244     | 20     | 0.6597        | -                      | -                      | -                      | -                     | -                      |
| 1.9492     | 24     | -             | 0.7583                 | 0.7628                 | 0.7726                 | 0.7219                | 0.7735                 |
| 2.4365     | 30     | 0.4472        | -                      | -                      | -                      | -                     | -                      |
| 2.9239     | 36     | -             | 0.7578                 | 0.7661                 | 0.7747                 | 0.7251                | 0.7753                 |
| 3.2487     | 40     | 0.3865        | -                      | -                      | -                      | -                     | -                      |
| **3.8985** | **48** | **-**         | **0.7596**             | **0.7685**             | **0.7757**             | **0.7249**            | **0.7747**             |

* The bold row denotes the saved checkpoint.

### Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.0.0
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.30.1
- Datasets: 2.19.1
- Tokenizers: 0.19.1

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```

#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
config.json ADDED
```json
{
  "_name_or_path": "BAAI/bge-base-en-v1.5",
  "architectures": ["BertModel"],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "id2label": {"0": "LABEL_0"},
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "label2id": {"LABEL_0": 0},
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.41.2",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
}
```
config_sentence_transformers.json ADDED
```json
{
  "__version__": {
    "sentence_transformers": "2.2.2",
    "transformers": "4.28.1",
    "pytorch": "1.13.0+cu117"
  },
  "prompts": {},
  "default_prompt_name": null,
  "similarity_fn_name": null
}
```
model.safetensors ADDED
```
version https://git-lfs.github.com/spec/v1
oid sha256:dcbcacd6875d719664ae5db69e0c548e003636818b4dd43e47cd408beff4ef0d
size 437951328
```
modules.json ADDED
```json
[
  {"idx": 0, "name": "0", "path": "", "type": "sentence_transformers.models.Transformer"},
  {"idx": 1, "name": "1", "path": "1_Pooling", "type": "sentence_transformers.models.Pooling"},
  {"idx": 2, "name": "2", "path": "2_Normalize", "type": "sentence_transformers.models.Normalize"}
]
```
sentence_bert_config.json ADDED
```json
{
  "max_seq_length": 512,
  "do_lower_case": true
}
```
special_tokens_map.json ADDED
```json
{
  "cls_token": {"content": "[CLS]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
  "mask_token": {"content": "[MASK]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
  "pad_token": {"content": "[PAD]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
  "sep_token": {"content": "[SEP]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
  "unk_token": {"content": "[UNK]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false}
}
```
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
```json
{
  "added_tokens_decoder": {
    "0": {"content": "[PAD]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "100": {"content": "[UNK]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "101": {"content": "[CLS]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "102": {"content": "[SEP]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "103": {"content": "[MASK]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}
  },
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "do_basic_tokenize": true,
  "do_lower_case": true,
  "mask_token": "[MASK]",
  "model_max_length": 512,
  "never_split": null,
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
  "unk_token": "[UNK]"
}
```
vocab.txt ADDED
The diff for this file is too large to render. See raw diff