chuhac committed on
Commit 1297350 · 1 Parent(s): 719c830
README.md CHANGED
@@ -1,3 +1,1949 @@
- ---
- license: apache-2.0
- ---
+ ---
+ tags:
+ - sentence-transformers
+ - feature-extraction
+ - sentence-similarity
+ - transformers
+ - mteb
+ license: apache-2.0
+ model-index:
+ - name: bge-en-icl
+   results:
+   - dataset:
+       config: en
+       name: MTEB AmazonCounterfactualClassification (en)
+       revision: e8379541af4e31359cca9fbcf4b00f2671dba205
+       split: test
+       type: mteb/amazon_counterfactual
+     metrics:
+     - type: accuracy
+       value: 93.1492537313433
+     - type: ap
+       value: 72.56132559564212
+     - type: f1
+       value: 89.71796898040243
+     - type: main_score
+       value: 93.1492537313433
+     task:
+       type: Classification
+   - dataset:
+       config: default
+       name: MTEB AmazonPolarityClassification
+       revision: e2d317d38cd51312af73b3d32a06d1a08b442046
+       split: test
+       type: mteb/amazon_polarity
+     metrics:
+     - type: accuracy
+       value: 96.98372499999999
+     - type: ap
+       value: 95.62303091773919
+     - type: f1
+       value: 96.98308191715637
+     - type: main_score
+       value: 96.98372499999999
+     task:
+       type: Classification
+   - dataset:
+       config: en
+       name: MTEB AmazonReviewsClassification (en)
+       revision: 1399c76144fd37290681b995c656ef9b2e06e26d
+       split: test
+       type: mteb/amazon_reviews_multi
+     metrics:
+     - type: accuracy
+       value: 61.461999999999996
+     - type: f1
+       value: 60.57257766583118
+     - type: main_score
+       value: 61.461999999999996
+     task:
+       type: Classification
+   - dataset:
+       config: default
+       name: MTEB ArguAna
+       revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
+       split: test
+       type: mteb/arguana
+     metrics:
+     - type: main_score
+       value: 83.07967801208441
+     - type: ndcg_at_1
+       value: 66.50071123755335
+     - type: ndcg_at_3
+       value: 80.10869593172173
+     - type: ndcg_at_5
+       value: 81.89670542467924
+     - type: ndcg_at_10
+       value: 83.07967801208441
+     - type: ndcg_at_100
+       value: 83.5991349601075
+     - type: ndcg_at_1000
+       value: 83.5991349601075
+     - type: map_at_1
+       value: 66.50071123755335
+     - type: map_at_3
+       value: 76.83736367946898
+     - type: map_at_5
+       value: 77.8473210052158
+     - type: map_at_10
+       value: 78.35472690735851
+     - type: map_at_100
+       value: 78.47388207611678
+     - type: map_at_1000
+       value: 78.47388207611678
+     - type: precision_at_1
+       value: 66.50071123755335
+     - type: precision_at_3
+       value: 29.848269321953076
+     - type: precision_at_5
+       value: 18.762446657183045
+     - type: precision_at_10
+       value: 9.736842105262909
+     - type: precision_at_100
+       value: 0.9964438122332677
+     - type: precision_at_1000
+       value: 0.09964438122332549
+     - type: recall_at_1
+       value: 66.50071123755335
+     - type: recall_at_3
+       value: 89.5448079658606
+     - type: recall_at_5
+       value: 93.8122332859175
+     - type: recall_at_10
+       value: 97.36842105263158
+     - type: recall_at_100
+       value: 99.6443812233286
+     - type: recall_at_1000
+       value: 99.6443812233286
+     task:
+       type: Retrieval
+   - dataset:
+       config: default
+       name: MTEB ArxivClusteringP2P
+       revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
+       split: test
+       type: mteb/arxiv-clustering-p2p
+     metrics:
+     - type: main_score
+       value: 54.43859683357485
+     - type: v_measure
+       value: 54.43859683357485
+     - type: v_measure_std
+       value: 14.511128158596337
+     task:
+       type: Clustering
+   - dataset:
+       config: default
+       name: MTEB ArxivClusteringS2S
+       revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
+       split: test
+       type: mteb/arxiv-clustering-s2s
+     metrics:
+     - type: main_score
+       value: 49.33365996236564
+     - type: v_measure
+       value: 49.33365996236564
+     - type: v_measure_std
+       value: 14.61261944856548
+     task:
+       type: Clustering
+   - dataset:
+       config: default
+       name: MTEB AskUbuntuDupQuestions
+       revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
+       split: test
+       type: mteb/askubuntudupquestions-reranking
+     metrics:
+     - type: main_score
+       value: 65.15263966490278
+     - type: map
+       value: 65.15263966490278
+     - type: mrr
+       value: 77.90331090885107
+     task:
+       type: Reranking
+   - dataset:
+       config: default
+       name: MTEB BIOSSES
+       revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
+       split: test
+       type: mteb/biosses-sts
+     metrics:
+     - type: main_score
+       value: 86.47365710792691
+     - type: cosine_spearman
+       value: 86.47365710792691
+     - type: spearman
+       value: 86.47365710792691
+     task:
+       type: STS
+   - dataset:
+       config: default
+       name: MTEB Banking77Classification
+       revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
+       split: test
+       type: mteb/banking77
+     metrics:
+     - type: accuracy
+       value: 91.48701298701299
+     - type: f1
+       value: 91.4733869423637
+     - type: main_score
+       value: 91.48701298701299
+     task:
+       type: Classification
+   - dataset:
+       config: default
+       name: MTEB BiorxivClusteringP2P
+       revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
+       split: test
+       type: mteb/biorxiv-clustering-p2p
+     metrics:
+     - type: main_score
+       value: 53.050461108038036
+     - type: v_measure
+       value: 53.050461108038036
+     - type: v_measure_std
+       value: 0.9436104839012786
+     task:
+       type: Clustering
+   - dataset:
+       config: default
+       name: MTEB BiorxivClusteringS2S
+       revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
+       split: test
+       type: mteb/biorxiv-clustering-s2s
+     metrics:
+     - type: main_score
+       value: 48.38215568371151
+     - type: v_measure
+       value: 48.38215568371151
+     - type: v_measure_std
+       value: 0.9104384504649026
+     task:
+       type: Clustering
+   - dataset:
+       config: default
+       name: MTEB CQADupstackRetrieval
+       revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
+       split: test
+       type: mteb/cqadupstack
+     metrics:
+     - type: main_score
+       value: 47.308084499970704
+     - type: ndcg_at_1
+       value: 36.038578730542476
+     - type: ndcg_at_3
+       value: 41.931365356453036
+     - type: ndcg_at_5
+       value: 44.479015523894994
+     - type: ndcg_at_10
+       value: 47.308084499970704
+     - type: ndcg_at_100
+       value: 52.498062430513606
+     - type: ndcg_at_1000
+       value: 54.2908789514719
+     - type: map_at_1
+       value: 30.38821701528966
+     - type: map_at_3
+       value: 37.974871761903636
+     - type: map_at_5
+       value: 39.85399878507757
+     - type: map_at_10
+       value: 41.31456611036795
+     - type: map_at_100
+       value: 42.62907836655835
+     - type: map_at_1000
+       value: 42.737235870659845
+     - type: precision_at_1
+       value: 36.038578730542476
+     - type: precision_at_3
+       value: 19.39960180094633
+     - type: precision_at_5
+       value: 13.79264655952497
+     - type: precision_at_10
+       value: 8.399223517333388
+     - type: precision_at_100
+       value: 1.2992373779520896
+     - type: precision_at_1000
+       value: 0.16327170951909567
+     - type: recall_at_1
+       value: 30.38821701528966
+     - type: recall_at_3
+       value: 45.51645512564165
+     - type: recall_at_5
+       value: 52.06077167834868
+     - type: recall_at_10
+       value: 60.38864106788279
+     - type: recall_at_100
+       value: 82.76968509918343
+     - type: recall_at_1000
+       value: 94.84170217080344
+     task:
+       type: Retrieval
+   - dataset:
+       config: default
+       name: MTEB ClimateFEVER
+       revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
+       split: test
+       type: mteb/climate-fever
+     metrics:
+     - type: main_score
+       value: 45.4272998284769
+     - type: ndcg_at_1
+       value: 44.36482084690554
+     - type: ndcg_at_3
+       value: 38.13005747178844
+     - type: ndcg_at_5
+       value: 40.83474510717123
+     - type: ndcg_at_10
+       value: 45.4272998284769
+     - type: ndcg_at_100
+       value: 52.880220707479516
+     - type: ndcg_at_1000
+       value: 55.364753427333
+     - type: map_at_1
+       value: 19.200868621064064
+     - type: map_at_3
+       value: 28.33785740137525
+     - type: map_at_5
+       value: 31.67162504524064
+     - type: map_at_10
+       value: 34.417673164090075
+     - type: map_at_100
+       value: 36.744753097028976
+     - type: map_at_1000
+       value: 36.91262189016135
+     - type: precision_at_1
+       value: 44.36482084690554
+     - type: precision_at_3
+       value: 29.14223669923975
+     - type: precision_at_5
+       value: 22.410423452768388
+     - type: precision_at_10
+       value: 14.293159609120309
+     - type: precision_at_100
+       value: 2.248859934853431
+     - type: precision_at_1000
+       value: 0.2722475570032542
+     - type: recall_at_1
+       value: 19.200868621064064
+     - type: recall_at_3
+       value: 34.132464712269176
+     - type: recall_at_5
+       value: 42.35613463626491
+     - type: recall_at_10
+       value: 52.50814332247546
+     - type: recall_at_100
+       value: 77.16178067318128
+     - type: recall_at_1000
+       value: 90.59174809989138
+     task:
+       type: Retrieval
+   - dataset:
+       config: default
+       name: MTEB DBPedia
+       revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
+       split: test
+       type: mteb/dbpedia
+     metrics:
+     - type: main_score
+       value: 51.634197691802754
+     - type: ndcg_at_1
+       value: 64.375
+     - type: ndcg_at_3
+       value: 55.677549598242614
+     - type: ndcg_at_5
+       value: 53.44347199908503
+     - type: ndcg_at_10
+       value: 51.634197691802754
+     - type: ndcg_at_100
+       value: 56.202861267183415
+     - type: ndcg_at_1000
+       value: 63.146019108272576
+     - type: map_at_1
+       value: 9.789380503780919
+     - type: map_at_3
+       value: 16.146582195277016
+     - type: map_at_5
+       value: 19.469695222167193
+     - type: map_at_10
+       value: 24.163327344766145
+     - type: map_at_100
+       value: 35.47047690245571
+     - type: map_at_1000
+       value: 37.5147432331838
+     - type: precision_at_1
+       value: 76.25
+     - type: precision_at_3
+       value: 59.08333333333333
+     - type: precision_at_5
+       value: 52.24999999999997
+     - type: precision_at_10
+       value: 42.54999999999994
+     - type: precision_at_100
+       value: 13.460000000000008
+     - type: precision_at_1000
+       value: 2.4804999999999966
+     - type: recall_at_1
+       value: 9.789380503780919
+     - type: recall_at_3
+       value: 17.48487134027656
+     - type: recall_at_5
+       value: 22.312024269698806
+     - type: recall_at_10
+       value: 30.305380335237324
+     - type: recall_at_100
+       value: 62.172868946596424
+     - type: recall_at_1000
+       value: 85.32410301328747
+     task:
+       type: Retrieval
+   - dataset:
+       config: default
+       name: MTEB EmotionClassification
+       revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
+       split: test
+       type: mteb/emotion
+     metrics:
+     - type: accuracy
+       value: 93.36
+     - type: f1
+       value: 89.73665936982262
+     - type: main_score
+       value: 93.36
+     task:
+       type: Classification
+   - dataset:
+       config: default
+       name: MTEB FEVER
+       revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
+       split: test
+       type: mteb/fever
+     metrics:
+     - type: main_score
+       value: 92.82809814626805
+     - type: ndcg_at_1
+       value: 88.98889888988899
+     - type: ndcg_at_3
+       value: 91.82404417747676
+     - type: ndcg_at_5
+       value: 92.41785792357787
+     - type: ndcg_at_10
+       value: 92.82809814626805
+     - type: ndcg_at_100
+       value: 93.31730867509245
+     - type: ndcg_at_1000
+       value: 93.45171203408582
+     - type: map_at_1
+       value: 82.64125817343636
+     - type: map_at_3
+       value: 89.39970782792554
+     - type: map_at_5
+       value: 89.96799501378695
+     - type: map_at_10
+       value: 90.27479706587437
+     - type: map_at_100
+       value: 90.45185655778057
+     - type: map_at_1000
+       value: 90.46130471574544
+     - type: precision_at_1
+       value: 88.98889888988899
+     - type: precision_at_3
+       value: 34.923492349234245
+     - type: precision_at_5
+       value: 21.524152415244043
+     - type: precision_at_10
+       value: 11.033603360337315
+     - type: precision_at_100
+       value: 1.1521152115211895
+     - type: precision_at_1000
+       value: 0.11765676567657675
+     - type: recall_at_1
+       value: 82.64125817343636
+     - type: recall_at_3
+       value: 94.35195900542428
+     - type: recall_at_5
+       value: 95.9071323799047
+     - type: recall_at_10
+       value: 97.04234113887586
+     - type: recall_at_100
+       value: 98.77282371094255
+     - type: recall_at_1000
+       value: 99.5555567461508
+     task:
+       type: Retrieval
+   - dataset:
+       config: default
+       name: MTEB FiQA2018
+       revision: 27a168819829fe9bcd655c2df245fb19452e8e06
+       split: test
+       type: mteb/fiqa
+     metrics:
+     - type: main_score
+       value: 59.67151242793314
+     - type: ndcg_at_1
+       value: 57.407407407407405
+     - type: ndcg_at_3
+       value: 53.79975378289304
+     - type: ndcg_at_5
+       value: 56.453379423655406
+     - type: ndcg_at_10
+       value: 59.67151242793314
+     - type: ndcg_at_100
+       value: 65.34055762539253
+     - type: ndcg_at_1000
+       value: 67.07707746043032
+     - type: map_at_1
+       value: 30.65887045053714
+     - type: map_at_3
+       value: 44.09107110881799
+     - type: map_at_5
+       value: 48.18573748068346
+     - type: map_at_10
+       value: 51.03680979612876
+     - type: map_at_100
+       value: 53.03165194566928
+     - type: map_at_1000
+       value: 53.16191096190861
+     - type: precision_at_1
+       value: 57.407407407407405
+     - type: precision_at_3
+       value: 35.493827160493886
+     - type: precision_at_5
+       value: 26.913580246913547
+     - type: precision_at_10
+       value: 16.435185185185155
+     - type: precision_at_100
+       value: 2.2685185185184986
+     - type: precision_at_1000
+       value: 0.25864197530863964
+     - type: recall_at_1
+       value: 30.65887045053714
+     - type: recall_at_3
+       value: 48.936723427464194
+     - type: recall_at_5
+       value: 58.55942925387371
+     - type: recall_at_10
+       value: 68.45128551147073
+     - type: recall_at_100
+       value: 88.24599311867836
+     - type: recall_at_1000
+       value: 98.18121693121691
+     task:
+       type: Retrieval
+   - dataset:
+       config: default
+       name: MTEB HotpotQA
+       revision: ab518f4d6fcca38d87c25209f94beba119d02014
+       split: test
+       type: mteb/hotpotqa
+     metrics:
+     - type: main_score
+       value: 85.13780800141961
+     - type: ndcg_at_1
+       value: 89.9392302498312
+     - type: ndcg_at_3
+       value: 81.2061569376288
+     - type: ndcg_at_5
+       value: 83.53311592078133
+     - type: ndcg_at_10
+       value: 85.13780800141961
+     - type: ndcg_at_100
+       value: 87.02630661625386
+     - type: ndcg_at_1000
+       value: 87.47294723601075
+     - type: map_at_1
+       value: 44.9696151249156
+     - type: map_at_3
+       value: 76.46972766148966
+     - type: map_at_5
+       value: 78.47749268512187
+     - type: map_at_10
+       value: 79.49792611170005
+     - type: map_at_100
+       value: 80.09409086274644
+     - type: map_at_1000
+       value: 80.11950878917663
+     - type: precision_at_1
+       value: 89.9392302498312
+     - type: precision_at_3
+       value: 53.261309925724234
+     - type: precision_at_5
+       value: 33.79338284942924
+     - type: precision_at_10
+       value: 17.69750168805041
+     - type: precision_at_100
+       value: 1.9141120864280805
+     - type: precision_at_1000
+       value: 0.19721809588118133
+     - type: recall_at_1
+       value: 44.9696151249156
+     - type: recall_at_3
+       value: 79.8919648885888
+     - type: recall_at_5
+       value: 84.48345712356516
+     - type: recall_at_10
+       value: 88.48750844024308
+     - type: recall_at_100
+       value: 95.70560432140446
+     - type: recall_at_1000
+       value: 98.60904794058068
+     task:
+       type: Retrieval
+   - dataset:
+       config: default
+       name: MTEB ImdbClassification
+       revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
+       split: test
+       type: mteb/imdb
+     metrics:
+     - type: accuracy
+       value: 96.9144
+     - type: ap
+       value: 95.45276911068486
+     - type: f1
+       value: 96.91412729455966
+     - type: main_score
+       value: 96.9144
+     task:
+       type: Classification
+   - dataset:
+       config: default
+       name: MTEB MSMARCO
+       revision: c5a29a104738b98a9e76336939199e264163d4a0
+       split: dev
+       type: mteb/msmarco
+     metrics:
+     - type: main_score
+       value: 46.78865753107054
+     - type: ndcg_at_1
+       value: 26.63323782234957
+     - type: ndcg_at_3
+       value: 38.497585804985754
+     - type: ndcg_at_5
+       value: 42.72761631631636
+     - type: ndcg_at_10
+       value: 46.78865753107054
+     - type: ndcg_at_100
+       value: 51.96170786623209
+     - type: ndcg_at_1000
+       value: 52.82713901970963
+     - type: map_at_1
+       value: 25.89063992359121
+     - type: map_at_3
+       value: 35.299466730340654
+     - type: map_at_5
+       value: 37.68771887933786
+     - type: map_at_10
+       value: 39.40908074468253
+     - type: map_at_100
+       value: 40.53444082323405
+     - type: map_at_1000
+       value: 40.57183037649452
+     - type: precision_at_1
+       value: 26.63323782234957
+     - type: precision_at_3
+       value: 16.265520534861793
+     - type: precision_at_5
+       value: 11.902578796562304
+     - type: precision_at_10
+       value: 7.262177650430416
+     - type: precision_at_100
+       value: 0.9819484240687512
+     - type: precision_at_1000
+       value: 0.10571633237823287
+     - type: recall_at_1
+       value: 25.89063992359121
+     - type: recall_at_3
+       value: 46.99737344794652
+     - type: recall_at_5
+       value: 57.160936007640906
+     - type: recall_at_10
+       value: 69.43409742120343
+     - type: recall_at_100
+       value: 92.86413562559697
+     - type: recall_at_1000
+       value: 99.3230659025788
+     task:
+       type: Retrieval
+   - dataset:
+       config: en
+       name: MTEB MTOPDomainClassification (en)
+       revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
+       split: test
+       type: mteb/mtop_domain
+     metrics:
+     - type: accuracy
+       value: 98.42225262197901
+     - type: f1
+       value: 98.31652547061115
+     - type: main_score
+       value: 98.42225262197901
+     task:
+       type: Classification
+   - dataset:
+       config: en
+       name: MTEB MTOPIntentClassification (en)
+       revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
+       split: test
+       type: mteb/mtop_intent
+     metrics:
+     - type: accuracy
+       value: 94.00136798905609
+     - type: f1
+       value: 82.7022316533099
+     - type: main_score
+       value: 94.00136798905609
+     task:
+       type: Classification
+   - dataset:
+       config: en
+       name: MTEB MassiveIntentClassification (en)
+       revision: 4672e20407010da34463acc759c162ca9734bca6
+       split: test
+       type: mteb/amazon_massive_intent
+     metrics:
+     - type: accuracy
+       value: 82.92535305985204
+     - type: f1
+       value: 79.885538231847
+     - type: main_score
+       value: 82.92535305985204
+     task:
+       type: Classification
+   - dataset:
+       config: en
+       name: MTEB MassiveScenarioClassification (en)
+       revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
+       split: test
+       type: mteb/amazon_massive_scenario
+     metrics:
+     - type: accuracy
+       value: 85.60188298587758
+     - type: f1
+       value: 84.87416963499224
+     - type: main_score
+       value: 85.60188298587758
+     task:
+       type: Classification
+   - dataset:
+       config: default
+       name: MTEB MedrxivClusteringP2P
+       revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
+       split: test
+       type: mteb/medrxiv-clustering-p2p
+     metrics:
+     - type: main_score
+       value: 45.86171497327639
+     - type: v_measure
+       value: 45.86171497327639
+     - type: v_measure_std
+       value: 1.551347259003324
+     task:
+       type: Clustering
+   - dataset:
+       config: default
+       name: MTEB MedrxivClusteringS2S
+       revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
+       split: test
+       type: mteb/medrxiv-clustering-s2s
+     metrics:
+     - type: main_score
+       value: 44.33336692345644
+     - type: v_measure
+       value: 44.33336692345644
+     - type: v_measure_std
+       value: 1.5931408596404715
+     task:
+       type: Clustering
+   - dataset:
+       config: default
+       name: MTEB MindSmallReranking
+       revision: 59042f120c80e8afa9cdbb224f67076cec0fc9a7
+       split: test
+       type: mteb/mind_small
+     metrics:
+     - type: main_score
+       value: 30.597409734750503
+     - type: map
+       value: 30.597409734750503
+     - type: mrr
+       value: 31.397041548018457
+     task:
+       type: Reranking
+   - dataset:
+       config: default
+       name: MTEB NFCorpus
+       revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
+       split: test
+       type: mteb/nfcorpus
+     metrics:
+     - type: main_score
+       value: 41.850870119787835
+     - type: ndcg_at_1
+       value: 52.47678018575851
+     - type: ndcg_at_3
+       value: 47.43993801247414
+     - type: ndcg_at_5
+       value: 45.08173173082719
+     - type: ndcg_at_10
+       value: 41.850870119787835
+     - type: ndcg_at_100
+       value: 37.79284946590978
+     - type: ndcg_at_1000
+       value: 46.58046062123418
+     - type: map_at_1
+       value: 6.892464464226138
+     - type: map_at_3
+       value: 12.113195798233127
+     - type: map_at_5
+       value: 13.968475602788812
+     - type: map_at_10
+       value: 16.47564069781326
+     - type: map_at_100
+       value: 20.671726065190025
+     - type: map_at_1000
+       value: 22.328875914012006
+     - type: precision_at_1
+       value: 53.86996904024768
+     - type: precision_at_3
+       value: 43.96284829721363
+     - type: precision_at_5
+       value: 38.69969040247682
+     - type: precision_at_10
+       value: 30.928792569659457
+     - type: precision_at_100
+       value: 9.507739938080498
+     - type: precision_at_1000
+       value: 2.25882352941176
+     - type: recall_at_1
+       value: 6.892464464226138
+     - type: recall_at_3
+       value: 13.708153358278407
+     - type: recall_at_5
+       value: 16.651919797359145
+     - type: recall_at_10
+       value: 21.01801714352559
+     - type: recall_at_100
+       value: 37.01672102843443
+     - type: recall_at_1000
+       value: 69.8307270724072
+     task:
+       type: Retrieval
+   - dataset:
+       config: default
+       name: MTEB NQ
+       revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
+       split: test
+       type: mteb/nq
+     metrics:
+     - type: main_score
+       value: 73.88350836507092
+     - type: ndcg_at_1
+       value: 57.0683661645423
+     - type: ndcg_at_3
+       value: 67.89935813080585
+     - type: ndcg_at_5
+       value: 71.47769719452941
+     - type: ndcg_at_10
+       value: 73.88350836507092
+     - type: ndcg_at_100
+       value: 75.76561068060907
+     - type: ndcg_at_1000
+       value: 75.92437662684215
+     - type: map_at_1
+       value: 51.00424874468904
+     - type: map_at_3
+       value: 63.87359984550011
+     - type: map_at_5
+       value: 66.23696407879494
+     - type: map_at_10
+       value: 67.42415446608673
+     - type: map_at_100
+       value: 67.92692839842621
+     - type: map_at_1000
+       value: 67.93437922640133
+     - type: precision_at_1
+       value: 57.0683661645423
+     - type: precision_at_3
+       value: 29.692931633836416
+     - type: precision_at_5
+       value: 20.046349942062854
+     - type: precision_at_10
+       value: 10.950173812283
+     - type: precision_at_100
+       value: 1.1995944380069687
+     - type: precision_at_1000
+       value: 0.12146581691772171
+     - type: recall_at_1
+       value: 51.00424874468904
+     - type: recall_at_3
+       value: 75.93665507918116
+     - type: recall_at_5
+       value: 83.95133256083433
+     - type: recall_at_10
+       value: 90.78794901506375
+     - type: recall_at_100
+       value: 98.61915797605253
+     - type: recall_at_1000
+       value: 99.7827346465817
+     task:
+       type: Retrieval
+   - dataset:
+       config: default
+       name: MTEB QuoraRetrieval
+       revision: e4e08e0b7dbe3c8700f0daef558ff32256715259
+       split: test
+       type: mteb/quora
+     metrics:
+     - type: main_score
+       value: 90.95410848372035
+     - type: ndcg_at_1
+       value: 84.61999999999999
+     - type: ndcg_at_3
+       value: 88.57366734033212
+     - type: ndcg_at_5
+       value: 89.89804048972175
+     - type: ndcg_at_10
+       value: 90.95410848372035
+     - type: ndcg_at_100
+       value: 91.83227134455773
+     - type: ndcg_at_1000
+       value: 91.88368412611601
+     - type: map_at_1
+       value: 73.4670089207039
+     - type: map_at_3
+       value: 84.87862925508942
+     - type: map_at_5
+       value: 86.68002324701408
+     - type: map_at_10
+       value: 87.7165466015312
+     - type: map_at_100
+       value: 88.28718809614146
+     - type: map_at_1000
+       value: 88.29877148480672
+     - type: precision_at_1
+       value: 84.61999999999999
+     - type: precision_at_3
+       value: 38.82333333333838
+     - type: precision_at_5
+       value: 25.423999999998642
+     - type: precision_at_10
+       value: 13.787999999998583
+     - type: precision_at_100
+       value: 1.5442999999999767
+     - type: precision_at_1000
+       value: 0.15672999999997972
+     - type: recall_at_1
+       value: 73.4670089207039
+     - type: recall_at_3
+       value: 89.98389854832143
+     - type: recall_at_5
+       value: 93.88541046010576
+     - type: recall_at_10
+       value: 96.99779417520634
+     - type: recall_at_100
+       value: 99.80318763957743
+     - type: recall_at_1000
+       value: 99.99638888888889
+     task:
+       type: Retrieval
+   - dataset:
+       config: default
+       name: MTEB RedditClustering
+       revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
+       split: test
+       type: mteb/reddit-clustering
+     metrics:
+     - type: main_score
+       value: 72.33008348681277
+     - type: v_measure
+       value: 72.33008348681277
+     - type: v_measure_std
+       value: 2.9203215463933008
+     task:
+       type: Clustering
+   - dataset:
+       config: default
+       name: MTEB RedditClusteringP2P
+       revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
+       split: test
+       type: mteb/reddit-clustering-p2p
+     metrics:
+     - type: main_score
+       value: 72.72079657828903
+     - type: v_measure
+       value: 72.72079657828903
+     - type: v_measure_std
+       value: 11.930271663428735
+     task:
+       type: Clustering
+   - dataset:
+       config: default
+       name: MTEB SCIDOCS
+       revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88
+       split: test
+       type: mteb/scidocs
+     metrics:
+     - type: main_score
+       value: 25.25865384510787
+     - type: ndcg_at_1
+       value: 28.7
+     - type: ndcg_at_3
+       value: 23.61736427940938
+     - type: ndcg_at_5
+       value: 20.845690325673885
+     - type: ndcg_at_10
+       value: 25.25865384510787
+     - type: ndcg_at_100
+       value: 36.18596641088721
+     - type: ndcg_at_1000
+       value: 41.7166868935345
+     - type: map_at_1
+       value: 5.828333333333361
+     - type: map_at_3
+       value: 10.689166666666676
+     - type: map_at_5
+       value: 13.069916666666668
+     - type: map_at_10
+       value: 15.4901164021164
+     - type: map_at_100
+       value: 18.61493245565425
+     - type: map_at_1000
+       value: 18.99943478016456
+     - type: precision_at_1
+       value: 28.7
+     - type: precision_at_3
+       value: 22.30000000000006
+     - type: precision_at_5
+       value: 18.55999999999997
+     - type: precision_at_10
+       value: 13.289999999999946
+     - type: precision_at_100
+       value: 2.905000000000005
+     - type: precision_at_1000
+       value: 0.4218999999999946
+     - type: recall_at_1
+       value: 5.828333333333361
+     - type: recall_at_3
+       value: 13.548333333333387
+     - type: recall_at_5
+       value: 18.778333333333308
+     - type: recall_at_10
+       value: 26.939999999999902
+     - type: recall_at_100
+       value: 58.91333333333344
+     - type: recall_at_1000
+       value: 85.57499999999972
+     task:
+       type: Retrieval
+   - dataset:
+       config: default
+       name: MTEB SICK-R
+       revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
+       split: test
+       type: mteb/sickr-sts
+     metrics:
+     - type: main_score
+       value: 83.86733787791422
+     - type: cosine_spearman
+       value: 83.86733787791422
+     - type: spearman
+       value: 83.86733787791422
+     task:
+       type: STS
+   - dataset:
+       config: default
+       name: MTEB STS12
+       revision: a0d554a64d88156834ff5ae9920b964011b16384
+       split: test
+       type: mteb/sts12-sts
+     metrics:
+     - type: main_score
+       value: 78.14269330480724
+     - type: cosine_spearman
+       value: 78.14269330480724
+     - type: spearman
+       value: 78.14269330480724
+     task:
+       type: STS
+   - dataset:
+       config: default
+       name: MTEB STS13
+       revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
+       split: test
+       type: mteb/sts13-sts
+     metrics:
+     - type: main_score
+       value: 86.58640009300751
+     - type: cosine_spearman
+       value: 86.58640009300751
+     - type: spearman
+       value: 86.58640009300751
+     task:
+       type: STS
+   - dataset:
+       config: default
+       name: MTEB STS14
+       revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
+       split: test
+       type: mteb/sts14-sts
+     metrics:
+     - type: main_score
+       value: 82.8292579957437
+     - type: cosine_spearman
+       value: 82.8292579957437
+     - type: spearman
+       value: 82.8292579957437
+     task:
+       type: STS
+   - dataset:
+       config: default
+       name: MTEB STS15
+       revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
+       split: test
+       type: mteb/sts15-sts
+     metrics:
+     - type: main_score
+       value: 87.77203714228862
+     - type: cosine_spearman
+       value: 87.77203714228862
+     - type: spearman
+       value: 87.77203714228862
+     task:
+       type: STS
+   - dataset:
+       config: default
+       name: MTEB STS16
+       revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
+       split: test
+       type: mteb/sts16-sts
+     metrics:
+     - type: main_score
+       value: 87.0439304006969
+     - type: cosine_spearman
+       value: 87.0439304006969
+     - type: spearman
+       value: 87.0439304006969
+     task:
+       type: STS
+   - dataset:
+       config: en-en
+       name: MTEB STS17 (en-en)
+       revision: faeb762787bd10488a50c8b5be4a3b82e411949c
+       split: test
+       type: mteb/sts17-crosslingual-sts
+     metrics:
+     - type: main_score
+       value: 91.24736138013424
+     - type: cosine_spearman
+       value: 91.24736138013424
+     - type: spearman
+       value: 91.24736138013424
+     task:
+       type: STS
+   - dataset:
+       config: en
+       name: MTEB STS22 (en)
+       revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
+       split: test
+       type: mteb/sts22-crosslingual-sts
+     metrics:
+     - type: main_score
+       value: 70.07326214706
+     - type: cosine_spearman
+       value: 70.07326214706
+     - type: spearman
+       value: 70.07326214706
+     task:
+       type: STS
+   - dataset:
+       config: default
+       name: MTEB STSBenchmark
+       revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
+       split: test
+       type: mteb/stsbenchmark-sts
+     metrics:
+     - type: main_score
+       value: 88.42076443255168
+     - type: cosine_spearman
+       value: 88.42076443255168
+     - type: spearman
+       value: 88.42076443255168
+     task:
+       type: STS
+   - dataset:
+       config: default
+       name: MTEB SciDocsRR
+       revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
+       split: test
+       type: mteb/scidocs-reranking
+     metrics:
+     - type: main_score
+       value: 86.9584489124583
+     - type: map
+       value: 86.9584489124583
+     - type: mrr
+       value: 96.59475328592976
+     task:
+       type: Reranking
+   - dataset:
+       config: default
+       name: MTEB SciFact
+       revision: 0228b52cf27578f30900b9e5271d331663a030d7
+       split: test
+       type: mteb/scifact
+     metrics:
+     - type: main_score
+       value: 79.09159079425369
+     - type: ndcg_at_1
+       value: 66.0
+     - type: ndcg_at_3
+       value: 74.98853481223065
+     - type: ndcg_at_5
+       value: 77.29382051205019
+     - type: ndcg_at_10
+       value: 79.09159079425369
+     - type: ndcg_at_100
+       value: 80.29692802526776
+     - type: ndcg_at_1000
+       value: 80.55210036585547
+     - type: map_at_1
+       value: 62.994444444444454
+     - type: map_at_3
+       value: 71.7425925925926
+     - type: map_at_5
+       value: 73.6200925925926
+     - type: map_at_10
+       value: 74.50223544973547
+     - type: map_at_100
+       value: 74.82438594015447
+     - type: map_at_1000
+       value: 74.83420474892468
+     - type: precision_at_1
+       value: 66.0
+     - type: precision_at_3
+       value: 29.44444444444439
+     - type: precision_at_5
+       value: 19.40000000000008
+     - type: precision_at_10
+       value: 10.366666666666715
+     - type: precision_at_100
+       value: 1.0999999999999928
+     - type: precision_at_1000
+       value: 0.11200000000000007
+     - type: recall_at_1
+       value: 62.994444444444454
+     - type: recall_at_3
+       value: 80.89999999999998
+     - type: recall_at_5
+       value: 86.72777777777779
+     - type: recall_at_10
+       value: 91.88888888888887
+     - type: recall_at_100
+       value: 97.0
+     - type: recall_at_1000
+       value: 99.0
+     task:
+       type: Retrieval
+   - dataset:
+       config: default
+       name: MTEB SprintDuplicateQuestions
+       revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
+       split: test
+       type: mteb/sprintduplicatequestions-pairclassification
+     metrics:
+     - type: main_score
+       value: 97.26819027722253
+     - type: cos_sim_accuracy
+       value: 99.88019801980198
+     - type: cos_sim_accuracy_threshold
+       value: 76.67685151100159
+     - type: cos_sim_ap
+       value: 97.23260568085786
+     - type: cos_sim_f1
+       value: 93.91824526420737
+     - type: cos_sim_f1_threshold
+       value: 75.82710981369019
+     - type: cos_sim_precision
+       value: 93.63817097415506
+     - type: cos_sim_recall
+       value: 94.19999999999999
+     - type: dot_accuracy
+       value: 99.88019801980198
+     - type: dot_accuracy_threshold
+       value: 76.67686343193054
+     - type: dot_ap
+       value: 97.23260568085786
+     - type: dot_f1
+       value: 93.91824526420737
+     - type: dot_f1_threshold
+       value: 75.8271336555481
+     - type: dot_precision
+       value: 93.63817097415506
+     - type: dot_recall
+       value: 94.19999999999999
+     - type: euclidean_accuracy
+       value: 99.88019801980198
+     - type: euclidean_accuracy_threshold
+       value: 68.29807758331299
+     - type: euclidean_ap
+       value: 97.23259982599497
+     - type: euclidean_f1
+       value: 93.91824526420737
+     - type: euclidean_f1_threshold
+       value: 69.53110694885254
+     - type: euclidean_precision
+       value: 93.63817097415506
+     - type: euclidean_recall
+       value: 94.19999999999999
+     - type: manhattan_accuracy
+       value: 99.87821782178217
+     - type: manhattan_accuracy_threshold
+       value: 3482.6908111572266
+     - type: manhattan_ap
+       value: 97.26819027722253
+     - type: manhattan_f1
+       value: 93.92592592592592
+     - type: manhattan_f1_threshold
+       value: 3555.5641174316406
+     - type: manhattan_precision
+       value: 92.78048780487805
+     - type: manhattan_recall
+       value: 95.1
+     - type: max_accuracy
+       value: 99.88019801980198
+     - type: max_ap
+       value: 97.26819027722253
+     - type: max_f1
+       value: 93.92592592592592
+     task:
+       type: PairClassification
+   - dataset:
+       config: default
+       name: MTEB StackExchangeClustering
+       revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
+       split: test
+       type: mteb/stackexchange-clustering
+     metrics:
+     - type: main_score
+       value: 81.32419328350603
+     - type: v_measure
+       value: 81.32419328350603
+     - type: v_measure_std
+       value: 2.666861121694755
+     task:
+       type: Clustering
+   - dataset:
+       config: default
+       name: MTEB StackExchangeClusteringP2P
+       revision: 815ca46b2622cec33ccafc3735d572c266efdb44
+       split: test
+       type: mteb/stackexchange-clustering-p2p
+     metrics:
+     - type: main_score
+       value: 46.048387963107565
+     - type: v_measure
+       value: 46.048387963107565
+     - type: v_measure_std
+       value: 1.4102848576321703
+     task:
+       type: Clustering
+   - dataset:
+       config: default
+       name: MTEB StackOverflowDupQuestions
+       revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
+       split: test
+       type: mteb/stackoverflowdupquestions-reranking
+     metrics:
+     - type: main_score
+       value: 56.70574900554072
+     - type: map
+       value: 56.70574900554072
+     - type: mrr
+       value: 57.517109116373824
+     task:
+       type: Reranking
+   - dataset:
+       config: default
+       name: MTEB SummEval
+       revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
+       split: test
+       type: mteb/summeval
+     metrics:
+     - type: main_score
+       value: 30.76932903185174
+     - type: cosine_spearman
+       value: 30.76932903185174
+     - type: spearman
+       value: 30.76932903185174
+     task:
+       type: Summarization
+   - dataset:
+       config: default
+       name: MTEB TRECCOVID
+       revision: bb9466bac8153a0349341eb1b22e06409e78ef4e
+       split: test
+       type: mteb/trec-covid
+     metrics:
+     - type: main_score
+       value: 79.07987651251462
+     - type: ndcg_at_1
+       value: 83.0
+     - type: ndcg_at_3
+       value: 79.86598407528447
+     - type: ndcg_at_5
+       value: 79.27684428714952
+     - type: ndcg_at_10
+       value: 79.07987651251462
+     - type: ndcg_at_100
+       value: 64.55029164391163
+     - type: ndcg_at_1000
+       value: 59.42333857860492
+     - type: map_at_1
+       value: 0.226053732680979
+     - type: map_at_3
+       value: 0.644034626013194
+     - type: map_at_5
+       value: 1.045196967937728
+     - type: map_at_10
+       value: 2.0197496659905085
+     - type: map_at_100
+       value: 13.316018005224159
+     - type: map_at_1000
+       value: 33.784766957424104
+     - type: precision_at_1
+       value: 88.0
+     - type: precision_at_3
+       value: 86.66666666666667
+     - type: precision_at_5
+       value: 85.20000000000002
+     - type: precision_at_10
+       value: 84.19999999999997
+     - type: precision_at_100
+       value: 67.88000000000001
+     - type: precision_at_1000
+       value: 26.573999999999998
+     - type: recall_at_1
+       value: 0.226053732680979
+     - type: recall_at_3
+       value: 0.6754273711472734
+     - type: recall_at_5
+       value: 1.1168649828059245
+     - type: recall_at_10
+       value: 2.2215081031265207
+     - type: recall_at_100
+       value: 16.694165236664727
+     - type: recall_at_1000
+       value: 56.7022214857503
+     task:
+       type: Retrieval
+   - dataset:
+       config: default
+       name: MTEB Touche2020
+       revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
+       split: test
+       type: mteb/touche2020
+     metrics:
+     - type: main_score
+       value: 30.47934263207554
+     - type: ndcg_at_1
+       value: 33.6734693877551
+     - type: ndcg_at_3
+       value: 34.36843900446739
+     - type: ndcg_at_5
+       value: 32.21323786731918
+     - type: ndcg_at_10
+       value: 30.47934263207554
+     - type: ndcg_at_100
+       value: 41.49598869753928
+     - type: ndcg_at_1000
+       value: 52.32963949183662
+     - type: map_at_1
+       value: 3.0159801678718168
+     - type: map_at_3
+       value: 7.13837927642557
+     - type: map_at_5
+       value: 9.274004610363466
+     - type: map_at_10
+       value: 12.957368366814324
+     - type: map_at_100
+       value: 19.3070585127604
+     - type: map_at_1000
+       value: 20.809777161133532
+     - type: precision_at_1
+       value: 34.69387755102041
+     - type: precision_at_3
+       value: 36.054421768707485
+     - type: precision_at_5
+       value: 32.24489795918368
+     - type: precision_at_10
+       value: 27.142857142857146
+     - type: precision_at_100
+       value: 8.326530612244898
+     - type: precision_at_1000
+       value: 1.5755102040816336
+     - type: recall_at_1
+       value: 3.0159801678718168
+     - type: recall_at_3
+       value: 8.321771388428257
+     - type: recall_at_5
+       value: 11.737532394366069
+     - type: recall_at_10
+       value: 19.49315139822179
+     - type: recall_at_100
+       value: 50.937064145519685
+     - type: recall_at_1000
+       value: 83.4358283484675
+     task:
+       type: Retrieval
+   - dataset:
+       config: default
+       name: MTEB ToxicConversationsClassification
+       revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
+       split: test
+       type: mteb/toxic_conversations_50k
+     metrics:
+     - type: accuracy
+       value: 93.173828125
+     - type: ap
+       value: 46.040184641424396
+     - type: f1
+       value: 80.77280549412752
+     - type: main_score
+       value: 93.173828125
+     task:
+       type: Classification
+   - dataset:
+       config: default
+       name: MTEB TweetSentimentExtractionClassification
+       revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
+       split: test
+       type: mteb/tweet_sentiment_extraction
+     metrics:
+     - type: accuracy
+       value: 79.9320882852292
+     - type: f1
+       value: 80.22638685975485
+     - type: main_score
+       value: 79.9320882852292
+     task:
+       type: Classification
+   - dataset:
+       config: default
+       name: MTEB TwentyNewsgroupsClustering
+       revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
+       split: test
+       type: mteb/twentynewsgroups-clustering
+     metrics:
+     - type: main_score
+       value: 68.98152919711418
+     - type: v_measure
+       value: 68.98152919711418
+     - type: v_measure_std
+       value: 1.2519720970652428
+     task:
+       type: Clustering
+   - dataset:
+       config: default
+       name: MTEB TwitterSemEval2015
+       revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
+       split: test
+       type: mteb/twittersemeval2015-pairclassification
+     metrics:
+     - type: main_score
+       value: 79.34189681158234
+     - type: cos_sim_accuracy
+       value: 87.68552184538356
+     - type: cos_sim_accuracy_threshold
+       value: 76.06316804885864
+     - type: cos_sim_ap
+       value: 79.34189149773933
+     - type: cos_sim_f1
+       value: 72.16386554621849
+     - type: cos_sim_f1_threshold
+       value: 73.62890243530273
+     - type: cos_sim_precision
+       value: 71.82435964453737
+     - type: cos_sim_recall
+       value: 72.5065963060686
+     - type: dot_accuracy
+       value: 87.68552184538356
+     - type: dot_accuracy_threshold
+       value: 76.06316208839417
+     - type: dot_ap
+       value: 79.34189231911259
+     - type: dot_f1
+       value: 72.16386554621849
+     - type: dot_f1_threshold
+       value: 73.62889647483826
+     - type: dot_precision
+       value: 71.82435964453737
+     - type: dot_recall
+       value: 72.5065963060686
+     - type: euclidean_accuracy
+       value: 87.68552184538356
+     - type: euclidean_accuracy_threshold
+       value: 69.19080018997192
+     - type: euclidean_ap
+       value: 79.34189681158234
+     - type: euclidean_f1
+       value: 72.16386554621849
+     - type: euclidean_f1_threshold
+       value: 72.62383103370667
+     - type: euclidean_precision
+       value: 71.82435964453737
+     - type: euclidean_recall
+       value: 72.5065963060686
+     - type: manhattan_accuracy
+       value: 87.661679680515
+     - type: manhattan_accuracy_threshold
+       value: 3408.807373046875
+     - type: manhattan_ap
+       value: 79.29617544165136
+     - type: manhattan_f1
+       value: 72.1957671957672
+     - type: manhattan_f1_threshold
+       value: 3597.7684020996094
+     - type: manhattan_precision
+       value: 72.38726790450929
+     - type: manhattan_recall
+       value: 72.00527704485488
+     - type: max_accuracy
+       value: 87.68552184538356
+     - type: max_ap
+       value: 79.34189681158234
+     - type: max_f1
+       value: 72.1957671957672
+     task:
+       type: PairClassification
+   - dataset:
+       config: default
+       name: MTEB TwitterURLCorpus
+       revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
+       split: test
+       type: mteb/twitterurlcorpus-pairclassification
+     metrics:
+     - type: main_score
+       value: 87.8635519535718
+     - type: cos_sim_accuracy
+       value: 89.80672953778088
+     - type: cos_sim_accuracy_threshold
+       value: 73.09532165527344
+     - type: cos_sim_ap
+       value: 87.84251379545145
+     - type: cos_sim_f1
+       value: 80.25858884373845
+     - type: cos_sim_f1_threshold
+       value: 70.57080268859863
+     - type: cos_sim_precision
+       value: 77.14103110353643
+     - type: cos_sim_recall
+       value: 83.63874345549738
+     - type: dot_accuracy
+       value: 89.80672953778088
+     - type: dot_accuracy_threshold
+       value: 73.09532761573792
+     - type: dot_ap
+       value: 87.84251881260793
+     - type: dot_f1
+       value: 80.25858884373845
+     - type: dot_f1_threshold
+       value: 70.57079076766968
+     - type: dot_precision
+       value: 77.14103110353643
+     - type: dot_recall
+       value: 83.63874345549738
+     - type: euclidean_accuracy
+       value: 89.80672953778088
+     - type: euclidean_accuracy_threshold
+       value: 73.3548641204834
+     - type: euclidean_ap
+       value: 87.84251335039049
+     - type: euclidean_f1
+       value: 80.25858884373845
+     - type: euclidean_f1_threshold
+       value: 76.71923041343689
+     - type: euclidean_precision
+       value: 77.14103110353643
+     - type: euclidean_recall
+       value: 83.63874345549738
+     - type: manhattan_accuracy
+       value: 89.78150347343501
+     - type: manhattan_accuracy_threshold
+       value: 3702.7603149414062
+     - type: manhattan_ap
+       value: 87.8635519535718
+     - type: manhattan_f1
+       value: 80.27105660516332
+     - type: manhattan_f1_threshold
+       value: 3843.5962677001953
+     - type: manhattan_precision
+       value: 76.9361101306036
+     - type: manhattan_recall
+       value: 83.90822297505389
+     - type: max_accuracy
+       value: 89.80672953778088
+     - type: max_ap
+       value: 87.8635519535718
+     - type: max_f1
+       value: 80.27105660516332
+     task:
+       type: PairClassification
+ ---
+ 
+ 
+ <h1 align="center">FlagEmbedding</h1>
+ 
+ 
+ For more details please refer to our GitHub: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding).
+ 
+ **BGE-EN-ICL** primarily demonstrates the following capabilities:
+ - In-context learning ability: providing few-shot examples in the query can significantly enhance the model's ability to handle new tasks (see the format sketch below).
+ - Outstanding performance: the model has achieved state-of-the-art (SOTA) performance on both BEIR and AIR-Bench.
+ 
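+ As a rough sketch of what "few-shot examples in the query" means here, each example is rendered with `<instruct>`/`<query>`/`<response>` tags and concatenated in front of the query (the exact templates appear in the Transformers example further down; the shortened response text below is just a placeholder):
+ 
+ ```python
+ # Minimal sketch of the few-shot prompt layout, following the template helpers
+ # shown in the "Using HuggingFace Transformers" section of this card.
+ task = 'Given a web search query, retrieve relevant passages that answer the query.'
+ example = f'<instruct>{task}\n<query>what is a virtual interface\n<response>A virtual interface is ... (shortened placeholder)'
+ query = f'<instruct>{task}\n<query>how much protein should a female eat'
+ 
+ # Examples are joined by blank lines and prepended to the query, and the
+ # prompt ends with a '\n<response>' marker up to which the encoder embeds.
+ prompt = example + '\n\n' + query + '\n<response>'
+ print(prompt)
+ ```
+ 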
+ ## 📑 Open-source Plan
+ 
+ - [x] Checkpoint
+ - [x] Training Data
+ - [x] Technical Report
+ - [ ] Evaluation Pipeline
+ 
+ The technical report for **BGE-EN-ICL** can be found in [Making Text Embedders Few-Shot Learners](https://arxiv.org/abs/2409.15700).
+ 
+ ## Data List
+ 
+ | Data | Introduction |
+ | ------------------------------------------------------------ | ------------------------------------------------------------ |
+ | [public-data](https://huggingface.co/datasets/cfli/bge-e5data) | Public data identical to [e5-mistral](https://huggingface.co/intfloat/e5-mistral-7b-instruct) |
+ | [full-data](https://huggingface.co/datasets/cfli/bge-full-data) | The full dataset we used for training |
+ 
+ ## Usage
+ 
+ ### Using FlagEmbedding
+ 
+ ```shell
+ git clone https://github.com/FlagOpen/FlagEmbedding.git
+ cd FlagEmbedding
+ pip install -e .
+ ```
+ 
+ ```python
+ from FlagEmbedding import FlagICLModel
+ 
+ queries = ["how much protein should a female eat", "summit define"]
+ documents = [
+     "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
+     "Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
+ ]
+ examples = [
+     {'instruct': 'Given a web search query, retrieve relevant passages that answer the query.',
+      'query': 'what is a virtual interface',
+      'response': "A virtual interface is a software-defined abstraction that mimics the behavior and characteristics of a physical network interface. It allows multiple logical network connections to share the same physical network interface, enabling efficient utilization of network resources. Virtual interfaces are commonly used in virtualization technologies such as virtual machines and containers to provide network connectivity without requiring dedicated hardware. They facilitate flexible network configurations and help in isolating network traffic for security and management purposes."},
+     {'instruct': 'Given a web search query, retrieve relevant passages that answer the query.',
+      'query': 'causes of back pain in female for a week',
+      'response': "Back pain in females lasting a week can stem from various factors. Common causes include muscle strain due to lifting heavy objects or improper posture, spinal issues like herniated discs or osteoporosis, menstrual cramps causing referred pain, urinary tract infections, or pelvic inflammatory disease. Pregnancy-related changes can also contribute. Stress and lack of physical activity may exacerbate symptoms. Proper diagnosis by a healthcare professional is crucial for effective treatment and management."}
+ ]
+ model = FlagICLModel('BAAI/bge-en-icl',
+                      query_instruction_for_retrieval="Given a web search query, retrieve relevant passages that answer the query.",
+                      examples_for_task=examples,  # set `examples_for_task=None` to use the model without examples
+                      use_fp16=True)  # setting use_fp16 to True speeds up computation with a slight performance degradation
+ embeddings_1 = model.encode_queries(queries)
+ embeddings_2 = model.encode_corpus(documents)
+ similarity = embeddings_1 @ embeddings_2.T
+ print(similarity)
+ ```
+ 
+ By default, FlagICLModel will use all available GPUs when encoding. Set `os.environ["CUDA_VISIBLE_DEVICES"]` to select specific GPUs, as in the sketch below.
+ You can also set `os.environ["CUDA_VISIBLE_DEVICES"]=""` to make all GPUs unavailable.
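+ 
+ A minimal sketch of GPU selection (the device IDs below are placeholders; the environment variable must be set before the model is constructed):
+ 
+ ```python
+ import os
+ 
+ # Restrict encoding to GPUs 0 and 1 (placeholder IDs); set "" to hide all GPUs.
+ os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"
+ 
+ from FlagEmbedding import FlagICLModel
+ 
+ model = FlagICLModel('BAAI/bge-en-icl', use_fp16=True)
+ ```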
+ 
+ 
+ ### Using HuggingFace Transformers
+ 
+ With the transformers package, you can use the model like this: first, pass your input through the transformer model, then select the last hidden state of the final non-padding token (last-token pooling, implemented by `last_token_pool` below) as the sentence embedding.
+ 
+ ```python
+ import torch
+ import torch.nn.functional as F
+ 
+ from torch import Tensor
+ from transformers import AutoTokenizer, AutoModel
+ 
+ 
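+ # Pool the hidden state at each sequence's final non-padding position. With
+ # left padding the last column is valid for every row; with right padding we
+ # index each row at its own (length - 1).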
+ def last_token_pool(last_hidden_states: Tensor,
+                     attention_mask: Tensor) -> Tensor:
+     left_padding = (attention_mask[:, -1].sum() == attention_mask.shape[0])
+     if left_padding:
+         return last_hidden_states[:, -1]
+     else:
+         sequence_lengths = attention_mask.sum(dim=1) - 1
+         batch_size = last_hidden_states.shape[0]
+         return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths]
+ 
+ 
1786
+ def get_detailed_instruct(task_description: str, query: str) -> str:
1787
+ return f'<instruct>{task_description}\n<query>{query}'
1788
+
1789
+ def get_detailed_example(task_description: str, query: str, response: str) -> str:
1790
+ return f'<instruct>{task_description}\n<query>{query}\n<response>{response}'
1791
+
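+ # build the final few-shot inputs: truncate each raw query so that examples_prefix + query + '\n<response>' fits, and return a padded max length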
1792
+ def get_new_queries(queries, query_max_len, examples_prefix, tokenizer):
1793
+ inputs = tokenizer(
1794
+ queries,
1795
+ max_length=query_max_len - len(tokenizer('<s>', add_special_tokens=False)['input_ids']) - len(
1796
+ tokenizer('\n<response></s>', add_special_tokens=False)['input_ids']),
1797
+ return_token_type_ids=False,
1798
+ truncation=True,
1799
+ return_tensors=None,
1800
+ add_special_tokens=False
1801
+ )
1802
+ prefix_ids = tokenizer(examples_prefix, add_special_tokens=False)['input_ids']
1803
+ suffix_ids = tokenizer('\n<response>', add_special_tokens=False)['input_ids']
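+ # round the combined prefix + query + suffix budget up to a multiple of 8 for padding-friendly batch shapes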
1804
+ new_max_length = (len(prefix_ids) + len(suffix_ids) + query_max_len + 8) // 8 * 8 + 8
1805
+ new_queries = tokenizer.batch_decode(inputs['input_ids'])
1806
+ for i in range(len(new_queries)):
1807
+ new_queries[i] = examples_prefix + new_queries[i] + '\n<response>'
1808
+ return new_max_length, new_queries
1809
+
1810
+ task = 'Given a web search query, retrieve relevant passages that answer the query.'
1811
+ examples = [
1812
+ {'instruct': 'Given a web search query, retrieve relevant passages that answer the query.',
1813
+ 'query': 'what is a virtual interface',
1814
+ 'response': "A virtual interface is a software-defined abstraction that mimics the behavior and characteristics of a physical network interface. It allows multiple logical network connections to share the same physical network interface, enabling efficient utilization of network resources. Virtual interfaces are commonly used in virtualization technologies such as virtual machines and containers to provide network connectivity without requiring dedicated hardware. They facilitate flexible network configurations and help in isolating network traffic for security and management purposes."},
1815
+ {'instruct': 'Given a web search query, retrieve relevant passages that answer the query.',
1816
+ 'query': 'causes of back pain in female for a week',
1817
+ 'response': "Back pain in females lasting a week can stem from various factors. Common causes include muscle strain due to lifting heavy objects or improper posture, spinal issues like herniated discs or osteoporosis, menstrual cramps causing referred pain, urinary tract infections, or pelvic inflammatory disease. Pregnancy-related changes can also contribute. Stress and lack of physical activity may exacerbate symptoms. Proper diagnosis by a healthcare professional is crucial for effective treatment and management."}
1818
+ ]
1819
+ examples = [get_detailed_example(e['instruct'], e['query'], e['response']) for e in examples]
1820
+ examples_prefix = '\n\n'.join(examples) + '\n\n'  # if there are no examples, just set examples_prefix = ''
1821
+ queries = [
1822
+ get_detailed_instruct(task, 'how much protein should a female eat'),
1823
+ get_detailed_instruct(task, 'summit define')
1824
+ ]
1825
+ documents = [
1826
+ "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
1827
+ "Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
1828
+ ]
1829
+ query_max_len, doc_max_len = 512, 512
1830
+
1831
+ tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-en-icl')
1832
+ model = AutoModel.from_pretrained('BAAI/bge-en-icl')
1833
+ model.eval()
1834
+
1835
+ new_query_max_len, new_queries = get_new_queries(queries, query_max_len, examples_prefix, tokenizer)
1836
+
1837
+ query_batch_dict = tokenizer(new_queries, max_length=new_query_max_len, padding=True, truncation=True, return_tensors='pt')
1838
+ doc_batch_dict = tokenizer(documents, max_length=doc_max_len, padding=True, truncation=True, return_tensors='pt')
1839
+
1840
+ with torch.no_grad():
1841
+ query_outputs = model(**query_batch_dict)
1842
+ query_embeddings = last_token_pool(query_outputs.last_hidden_state, query_batch_dict['attention_mask'])
1843
+ doc_outputs = model(**doc_batch_dict)
1844
+ doc_embeddings = last_token_pool(doc_outputs.last_hidden_state, doc_batch_dict['attention_mask'])
1845
+
1846
+ # normalize embeddings
1847
+ query_embeddings = F.normalize(query_embeddings, p=2, dim=1)
1848
+ doc_embeddings = F.normalize(doc_embeddings, p=2, dim=1)
1849
+ scores = (query_embeddings @ doc_embeddings.T) * 100
1850
+ print(scores.tolist())
1851
+ ```
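+
+ The printed `scores` is a 2x2 nested list (queries x documents); because both embedding sets are L2-normalized before the matrix product, each entry is a cosine similarity scaled by 100, and the matched query-document pairs should score highest.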
1852
+
1853
+
1854
+ ## Evaluation
1855
+
1856
+ `bge-en-icl` achieves **state-of-the-art performance on both the MTEB and AIR-Bench leaderboards!**
1857
+
1858
+ - **[MTEB](https://huggingface.co/spaces/mteb/leaderboard)**:
1859
+
1860
+ ![MTEB](./results/MTEB.png)
1861
+
1862
+ - **[BEIR](https://huggingface.co/spaces/mteb/leaderboard)**:
1863
+
1864
+ ![BEIR](./results/BEIR.png)
1865
+
1866
+ - **[AIR-Bench](https://huggingface.co/spaces/AIR-Bench/leaderboard)**:
1867
+
1868
+ **QA (en, nDCG@10):**
1869
+
1870
+ | AIR-Bench_24.04 | wiki | web | news | healthcare | law | finance | arxiv | msmarco | ALL (8) |
1871
+ | :--------------------------: | :-------: | :-------: | :-------: | :--------: | :-------: | :-------: | :-------: | :-------: | :-------: |
1872
+ | **e5-mistral-7b-instruct** | 61.67 | 44.41 | 48.18 | 56.32 | 19.32 | 54.79 | 44.78 | 59.03 | 48.56 |
1873
+ | **SFR-Embedding-Mistral** | 63.46 | 51.27 | 52.21 | 58.76 | 23.27 | 56.94 | 47.75 | 58.99 | 51.58 |
1874
+ | **NV-Embed-v1** | 62.84 | 50.42 | 51.46 | 58.53 | 20.65 | 49.89 | 46.10 | 60.27 | 50.02 |
1875
+ | **Linq-Embed-Mistral** | 61.04 | 48.41 | 49.44 | **60.18** | 20.34 | 50.04 | 47.56 | 60.50 | 49.69 |
1876
+ | **gte-Qwen2-7B-instruct** | 63.46 | 51.20 | 54.07 | 54.20 | 22.31 | **58.20** | 40.27 | 58.39 | 50.26 |
1877
+ | **stella_en_1.5B_v5** | 61.99 | 50.88 | 53.87 | 58.81 | 23.22 | 57.26 | 44.81 | 61.38 | 51.53 |
1878
+ | **bge-en-icl zero-shot** | 64.61 | 54.40 | 55.11 | 57.25 | 25.10 | 54.81 | 48.46 | 63.71 | 52.93 |
1879
+ | **bge-en-icl few-shot** | **64.94** | **55.11** | **56.02** | 58.85 | **28.29** | 57.16 | **50.04** | **64.50** | **54.36** |
1880
+
1881
+ **Long-Doc (en, Recall@10):**
1882
+
1883
+ | AIR-Bench_24.04 | arxiv (4) | book (2) | healthcare (5) | law (4) | ALL (15) |
1884
+ | :--------------------------: | :-------: | :-------: | :------------: | :-------: | :-------: |
1885
+ | **text-embedding-3-large** | 74.53 | 73.16 | 65.83 | 64.47 | 68.77 |
1886
+ | **e5-mistral-7b-instruct** | 72.14 | 72.44 | 68.44 | 62.92 | 68.49 |
1887
+ | **SFR-Embedding-Mistral** | 72.79 | 72.41 | 67.94 | 64.83 | 69.00 |
1888
+ | **NV-Embed-v1** | 77.65 | 75.49 | 72.38 | **69.55** | 73.45 |
1889
+ | **Linq-Embed-Mistral** | 75.46 | 73.81 | 71.58 | 68.58 | 72.11 |
1890
+ | **gte-Qwen2-7B-instruct** | 63.93 | 68.51 | 65.59 | 65.26 | 65.45 |
1891
+ | **stella_en_1.5B_v5** | 73.17 | 74.38 | 70.02 | 69.32 | 71.25 |
1892
+ | **bge-en-icl zero-shot** | 78.30 | 78.21 | 73.65 | 67.09 | 73.75 |
1893
+ | **bge-en-icl few-shot** | **79.63** | **79.36** | **74.80** | 67.79 | **74.83** |
1894
+
1895
+
1896
+ ## Model List
1897
+
1898
+ `bge` is short for `BAAI general embedding`.
1899
+
1900
+ | Model | Language | Inference & Fine-tune | Description | query instruction for retrieval [1] |
1901
+ |:--------------------------------------------------------------------------|:-------------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------------------------------------------------------------:|:--------:|
1902
+ | [BAAI/bge-en-icl](https://huggingface.co/BAAI/bge-en-icl) | English | - | An LLM-based embedding model with in-context learning capability, which can fully leverage the model's potential given a few in-context examples | Provide instructions and few-shot examples freely based on the given task. |
1903
+ | [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) | Multilingual | [Inference](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3#usage) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3) | Multi-Functionality (dense retrieval, sparse retrieval, multi-vector (ColBERT)), Multi-Linguality, and Multi-Granularity (8192 tokens) | |
1904
+ | [BAAI/llm-embedder](https://huggingface.co/BAAI/llm-embedder) | English | [Inference](./FlagEmbedding/llm_embedder/README.md) [Fine-tune](./FlagEmbedding/llm_embedder/README.md) | a unified embedding model to support diverse retrieval augmentation needs for LLMs | See [README](./FlagEmbedding/llm_embedder/README.md) |
1905
+ | [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | |
1906
+ | [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | |
1907
+ | [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
1908
+ | [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
1909
+ | [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
1910
+ | [BAAI/bge-large-zh-v1.5](https://huggingface.co/BAAI/bge-large-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
1911
+ | [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
1912
+ | [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
1913
+ | [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` |
1914
+ | [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-en` | `Represent this sentence for searching relevant passages: ` |
1915
+ | [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a small-scale model but with competitive performance | `Represent this sentence for searching relevant passages: ` |
1916
+ | [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | `为这个句子生成表示以用于检索相关文章:` |
1917
+ | [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-zh` | `为这个句子生成表示以用于检索相关文章:` |
1918
+ | [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a small-scale model but with competitive performance | `为这个句子生成表示以用于检索相关文章:` |
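+
+ As a brief illustration of the "query instruction for retrieval" column, here is a minimal sketch using `FlagModel` from the same FlagEmbedding package (the model name and texts are only examples):
+
+ ```python
+ from FlagEmbedding import FlagModel
+
+ # The instruction is prepended to queries only, not to passages.
+ model = FlagModel('BAAI/bge-large-en-v1.5',
+                   query_instruction_for_retrieval="Represent this sentence for searching relevant passages: ",
+                   use_fp16=True)
+
+ query_embeddings = model.encode_queries(["what is a virtual interface"])
+ passage_embeddings = model.encode(["A virtual interface is a software-defined abstraction of a physical network interface."])
+ print(query_embeddings @ passage_embeddings.T)
+ ```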
1919
+
1920
+
1921
+
1922
+
1923
+
1924
+ ## Citation
1925
+
1926
+ If you find this repository useful, please consider giving it a star :star: and a citation.
1927
+
1928
+ ```bibtex
1929
+ @misc{li2024makingtextembeddersfewshot,
1930
+ title={Making Text Embedders Few-Shot Learners},
1931
+ author={Chaofan Li and MingHao Qin and Shitao Xiao and Jianlyu Chen and Kun Luo and Yingxia Shao and Defu Lian and Zheng Liu},
1932
+ year={2024},
1933
+ eprint={2409.15700},
1934
+ archivePrefix={arXiv},
1935
+ primaryClass={cs.IR},
1936
+ url={https://arxiv.org/abs/2409.15700},
1937
+ }
1938
+ @misc{bge_embedding,
1939
+ title={C-Pack: Packaged Resources To Advance General Chinese Embedding},
1940
+ author={Shitao Xiao and Zheng Liu and Peitian Zhang and Niklas Muennighoff},
1941
+ year={2023},
1942
+ eprint={2309.07597},
1943
+ archivePrefix={arXiv},
1944
+ primaryClass={cs.CL}
1945
+ }
1946
+ ```
1947
+
1948
+ ## License
1949
+ FlagEmbedding is licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE).
added_tokens.json ADDED
@@ -0,0 +1,5 @@
1
+ {
2
+ "<instruct>": 32000,
3
+ "<query>": 32001,
4
+ "<response>": 32002
5
+ }
config.json ADDED
@@ -0,0 +1,27 @@
1
+ {
2
+ "_name_or_path": "/data/bge-en-icl",
3
+ "architectures": [
4
+ "MistralModel"
5
+ ],
6
+ "attention_dropout": 0.0,
7
+ "bos_token_id": 1,
8
+ "eos_token_id": 2,
9
+ "head_dim": 128,
10
+ "hidden_act": "silu",
11
+ "hidden_size": 4096,
12
+ "initializer_range": 0.02,
13
+ "intermediate_size": 14336,
14
+ "max_position_embeddings": 32768,
15
+ "model_type": "mistral",
16
+ "num_attention_heads": 32,
17
+ "num_hidden_layers": 32,
18
+ "num_key_value_heads": 8,
19
+ "rms_norm_eps": 1e-05,
20
+ "rope_theta": 10000.0,
21
+ "sliding_window": 4096,
22
+ "tie_word_embeddings": false,
23
+ "torch_dtype": "bfloat16",
24
+ "transformers_version": "4.45.0",
25
+ "use_cache": false,
26
+ "vocab_size": 32003
27
+ }
model-00001-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a796fcebe815b95257bfaf88e6f3b5cd0e4e90aa29291cd67bb40768436fc39a
3
+ size 4943186336
model-00002-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f37443d0d6c3103b5414c7907324be85dfcd0184477dedac7989f9563bc22d10
3
+ size 4999818704
model-00003-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c69341be083f2eb78f67986df53189566348dc3a85a28cb21881e3dc38869eed
3
+ size 4278371712
model.safetensors.index.json ADDED
@@ -0,0 +1,297 @@
1
+ {
2
+ "metadata": {
3
+ "total_size": 14221344768
4
+ },
5
+ "weight_map": {
6
+ "embed_tokens.weight": "model-00001-of-00003.safetensors",
7
+ "layers.0.input_layernorm.weight": "model-00001-of-00003.safetensors",
8
+ "layers.0.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
9
+ "layers.0.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
10
+ "layers.0.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
11
+ "layers.0.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
12
+ "layers.0.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
13
+ "layers.0.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
14
+ "layers.0.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
15
+ "layers.0.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
16
+ "layers.1.input_layernorm.weight": "model-00001-of-00003.safetensors",
17
+ "layers.1.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
18
+ "layers.1.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
19
+ "layers.1.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
20
+ "layers.1.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
21
+ "layers.1.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
22
+ "layers.1.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
23
+ "layers.1.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
24
+ "layers.1.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
25
+ "layers.10.input_layernorm.weight": "model-00002-of-00003.safetensors",
26
+ "layers.10.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
27
+ "layers.10.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
28
+ "layers.10.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
29
+ "layers.10.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
30
+ "layers.10.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
31
+ "layers.10.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
32
+ "layers.10.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
33
+ "layers.10.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
34
+ "layers.11.input_layernorm.weight": "model-00002-of-00003.safetensors",
35
+ "layers.11.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
36
+ "layers.11.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
37
+ "layers.11.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
38
+ "layers.11.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
39
+ "layers.11.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
40
+ "layers.11.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
41
+ "layers.11.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
42
+ "layers.11.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
43
+ "layers.12.input_layernorm.weight": "model-00002-of-00003.safetensors",
44
+ "layers.12.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
45
+ "layers.12.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
46
+ "layers.12.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
47
+ "layers.12.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
48
+ "layers.12.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
49
+ "layers.12.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
50
+ "layers.12.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
51
+ "layers.12.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
52
+ "layers.13.input_layernorm.weight": "model-00002-of-00003.safetensors",
53
+ "layers.13.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
54
+ "layers.13.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
55
+ "layers.13.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
56
+ "layers.13.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
57
+ "layers.13.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
58
+ "layers.13.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
59
+ "layers.13.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
60
+ "layers.13.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
61
+ "layers.14.input_layernorm.weight": "model-00002-of-00003.safetensors",
62
+ "layers.14.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
63
+ "layers.14.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
64
+ "layers.14.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
65
+ "layers.14.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
66
+ "layers.14.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
67
+ "layers.14.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
68
+ "layers.14.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
69
+ "layers.14.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
70
+ "layers.15.input_layernorm.weight": "model-00002-of-00003.safetensors",
71
+ "layers.15.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
72
+ "layers.15.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
73
+ "layers.15.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
74
+ "layers.15.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
75
+ "layers.15.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
76
+ "layers.15.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
77
+ "layers.15.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
78
+ "layers.15.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
79
+ "layers.16.input_layernorm.weight": "model-00002-of-00003.safetensors",
80
+ "layers.16.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
81
+ "layers.16.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
82
+ "layers.16.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
83
+ "layers.16.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
84
+ "layers.16.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
85
+ "layers.16.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
86
+ "layers.16.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
87
+ "layers.16.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
88
+ "layers.17.input_layernorm.weight": "model-00002-of-00003.safetensors",
89
+ "layers.17.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
90
+ "layers.17.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
91
+ "layers.17.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
92
+ "layers.17.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
93
+ "layers.17.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
94
+ "layers.17.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
95
+ "layers.17.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
96
+ "layers.17.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
97
+ "layers.18.input_layernorm.weight": "model-00002-of-00003.safetensors",
98
+ "layers.18.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
99
+ "layers.18.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
100
+ "layers.18.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
101
+ "layers.18.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
102
+ "layers.18.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
103
+ "layers.18.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
104
+ "layers.18.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
105
+ "layers.18.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
106
+ "layers.19.input_layernorm.weight": "model-00002-of-00003.safetensors",
107
+ "layers.19.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
108
+ "layers.19.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
109
+ "layers.19.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
110
+ "layers.19.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
111
+ "layers.19.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
112
+ "layers.19.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
113
+ "layers.19.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
114
+ "layers.19.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
115
+ "layers.2.input_layernorm.weight": "model-00001-of-00003.safetensors",
116
+ "layers.2.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
117
+ "layers.2.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
118
+ "layers.2.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
119
+ "layers.2.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
120
+ "layers.2.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
121
+ "layers.2.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
122
+ "layers.2.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
123
+ "layers.2.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
124
+ "layers.20.input_layernorm.weight": "model-00002-of-00003.safetensors",
125
+ "layers.20.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
126
+ "layers.20.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
127
+ "layers.20.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
128
+ "layers.20.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
129
+ "layers.20.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
130
+ "layers.20.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
131
+ "layers.20.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
132
+ "layers.20.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
133
+ "layers.21.input_layernorm.weight": "model-00002-of-00003.safetensors",
134
+ "layers.21.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
135
+ "layers.21.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
136
+ "layers.21.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
137
+ "layers.21.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
138
+ "layers.21.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
139
+ "layers.21.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
140
+ "layers.21.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
141
+ "layers.21.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
142
+ "layers.22.input_layernorm.weight": "model-00003-of-00003.safetensors",
143
+ "layers.22.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
144
+ "layers.22.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
145
+ "layers.22.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
146
+ "layers.22.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
147
+ "layers.22.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
148
+ "layers.22.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
149
+ "layers.22.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
150
+ "layers.22.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
151
+ "layers.23.input_layernorm.weight": "model-00003-of-00003.safetensors",
152
+ "layers.23.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
153
+ "layers.23.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
154
+ "layers.23.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
155
+ "layers.23.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
156
+ "layers.23.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
157
+ "layers.23.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
158
+ "layers.23.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
159
+ "layers.23.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
160
+ "layers.24.input_layernorm.weight": "model-00003-of-00003.safetensors",
161
+ "layers.24.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
162
+ "layers.24.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
163
+ "layers.24.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
164
+ "layers.24.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
165
+ "layers.24.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
166
+ "layers.24.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
167
+ "layers.24.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
168
+ "layers.24.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
169
+ "layers.25.input_layernorm.weight": "model-00003-of-00003.safetensors",
170
+ "layers.25.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
171
+ "layers.25.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
172
+ "layers.25.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
173
+ "layers.25.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
174
+ "layers.25.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
175
+ "layers.25.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
176
+ "layers.25.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
177
+ "layers.25.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
178
+ "layers.26.input_layernorm.weight": "model-00003-of-00003.safetensors",
179
+ "layers.26.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
180
+ "layers.26.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
181
+ "layers.26.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
182
+ "layers.26.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
183
+ "layers.26.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
184
+ "layers.26.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
185
+ "layers.26.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
186
+ "layers.26.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
187
+ "layers.27.input_layernorm.weight": "model-00003-of-00003.safetensors",
188
+ "layers.27.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
189
+ "layers.27.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
190
+ "layers.27.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
191
+ "layers.27.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
192
+ "layers.27.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
193
+ "layers.27.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
194
+ "layers.27.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
195
+ "layers.27.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
196
+ "layers.28.input_layernorm.weight": "model-00003-of-00003.safetensors",
197
+ "layers.28.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
198
+ "layers.28.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
199
+ "layers.28.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
200
+ "layers.28.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
201
+ "layers.28.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
202
+ "layers.28.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
203
+ "layers.28.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
204
+ "layers.28.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
205
+ "layers.29.input_layernorm.weight": "model-00003-of-00003.safetensors",
206
+ "layers.29.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
207
+ "layers.29.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
208
+ "layers.29.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
209
+ "layers.29.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
210
+ "layers.29.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
211
+ "layers.29.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
212
+ "layers.29.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
213
+ "layers.29.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
214
+ "layers.3.input_layernorm.weight": "model-00001-of-00003.safetensors",
215
+ "layers.3.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
216
+ "layers.3.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
217
+ "layers.3.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
218
+ "layers.3.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
219
+ "layers.3.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
220
+ "layers.3.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
221
+ "layers.3.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
222
+ "layers.3.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
223
+ "layers.30.input_layernorm.weight": "model-00003-of-00003.safetensors",
224
+ "layers.30.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
225
+ "layers.30.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
226
+ "layers.30.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
227
+ "layers.30.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
228
+ "layers.30.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
229
+ "layers.30.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
230
+ "layers.30.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
231
+ "layers.30.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
232
+ "layers.31.input_layernorm.weight": "model-00003-of-00003.safetensors",
233
+ "layers.31.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
234
+ "layers.31.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
235
+ "layers.31.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
236
+ "layers.31.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
237
+ "layers.31.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
238
+ "layers.31.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
239
+ "layers.31.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
240
+ "layers.31.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
241
+ "layers.4.input_layernorm.weight": "model-00001-of-00003.safetensors",
242
+ "layers.4.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
243
+ "layers.4.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
244
+ "layers.4.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
245
+ "layers.4.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
246
+ "layers.4.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
247
+ "layers.4.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
248
+ "layers.4.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
249
+ "layers.4.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
250
+ "layers.5.input_layernorm.weight": "model-00001-of-00003.safetensors",
251
+ "layers.5.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
252
+ "layers.5.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
253
+ "layers.5.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
254
+ "layers.5.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
255
+ "layers.5.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
256
+ "layers.5.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
257
+ "layers.5.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
258
+ "layers.5.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
259
+ "layers.6.input_layernorm.weight": "model-00001-of-00003.safetensors",
260
+ "layers.6.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
261
+ "layers.6.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
262
+ "layers.6.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
263
+ "layers.6.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
264
+ "layers.6.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
265
+ "layers.6.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
266
+ "layers.6.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
267
+ "layers.6.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
268
+ "layers.7.input_layernorm.weight": "model-00001-of-00003.safetensors",
269
+ "layers.7.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
270
+ "layers.7.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
271
+ "layers.7.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
272
+ "layers.7.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
273
+ "layers.7.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
274
+ "layers.7.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
275
+ "layers.7.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
276
+ "layers.7.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
277
+ "layers.8.input_layernorm.weight": "model-00001-of-00003.safetensors",
278
+ "layers.8.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
279
+ "layers.8.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
280
+ "layers.8.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
281
+ "layers.8.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
282
+ "layers.8.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
283
+ "layers.8.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
284
+ "layers.8.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
285
+ "layers.8.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
286
+ "layers.9.input_layernorm.weight": "model-00001-of-00003.safetensors",
287
+ "layers.9.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
288
+ "layers.9.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
289
+ "layers.9.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
290
+ "layers.9.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
291
+ "layers.9.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
292
+ "layers.9.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
293
+ "layers.9.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
294
+ "layers.9.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
295
+ "norm.weight": "model-00003-of-00003.safetensors"
296
+ }
297
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,35 @@
1
+ {
2
+ "additional_special_tokens": [
3
+ "<instruct>",
4
+ "<query>",
5
+ "<response>"
6
+ ],
7
+ "bos_token": {
8
+ "content": "<s>",
9
+ "lstrip": false,
10
+ "normalized": false,
11
+ "rstrip": false,
12
+ "single_word": false
13
+ },
14
+ "eos_token": {
15
+ "content": "</s>",
16
+ "lstrip": false,
17
+ "normalized": false,
18
+ "rstrip": false,
19
+ "single_word": false
20
+ },
21
+ "pad_token": {
22
+ "content": "<unk>",
23
+ "lstrip": false,
24
+ "normalized": false,
25
+ "rstrip": false,
26
+ "single_word": false
27
+ },
28
+ "unk_token": {
29
+ "content": "<unk>",
30
+ "lstrip": false,
31
+ "normalized": false,
32
+ "rstrip": false,
33
+ "single_word": false
34
+ }
35
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
tokenizer.model ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:dadfd56d766715c61d2ef780a525ab43b8e6da4de6865bda3d95fdef5e134055
3
+ size 493443
tokenizer_config.json ADDED
@@ -0,0 +1,71 @@
1
+ {
2
+ "add_bos_token": true,
3
+ "add_eos_token": true,
4
+ "add_prefix_space": true,
5
+ "added_tokens_decoder": {
6
+ "0": {
7
+ "content": "<unk>",
8
+ "lstrip": false,
9
+ "normalized": false,
10
+ "rstrip": false,
11
+ "single_word": false,
12
+ "special": true
13
+ },
14
+ "1": {
15
+ "content": "<s>",
16
+ "lstrip": false,
17
+ "normalized": false,
18
+ "rstrip": false,
19
+ "single_word": false,
20
+ "special": true
21
+ },
22
+ "2": {
23
+ "content": "</s>",
24
+ "lstrip": false,
25
+ "normalized": false,
26
+ "rstrip": false,
27
+ "single_word": false,
28
+ "special": true
29
+ },
30
+ "32000": {
31
+ "content": "<instruct>",
32
+ "lstrip": false,
33
+ "normalized": false,
34
+ "rstrip": false,
35
+ "single_word": false,
36
+ "special": true
37
+ },
38
+ "32001": {
39
+ "content": "<query>",
40
+ "lstrip": false,
41
+ "normalized": false,
42
+ "rstrip": false,
43
+ "single_word": false,
44
+ "special": true
45
+ },
46
+ "32002": {
47
+ "content": "<response>",
48
+ "lstrip": false,
49
+ "normalized": false,
50
+ "rstrip": false,
51
+ "single_word": false,
52
+ "special": true
53
+ }
54
+ },
55
+ "additional_special_tokens": [
56
+ "<instruct>",
57
+ "<query>",
58
+ "<response>"
59
+ ],
60
+ "bos_token": "<s>",
61
+ "clean_up_tokenization_spaces": false,
62
+ "eos_token": "</s>",
63
+ "legacy": true,
64
+ "model_max_length": 1000000000000000019884624838656,
65
+ "pad_token": "<unk>",
66
+ "sp_model_kwargs": {},
67
+ "spaces_between_special_tokens": false,
68
+ "tokenizer_class": "LlamaTokenizer",
69
+ "unk_token": "<unk>",
70
+ "use_default_system_prompt": false
71
+ }