yano0 committed on
Commit
a2f8027
·
verified ·
1 Parent(s): 64f5c2d

Update README.md

Files changed (1): README.md (+12 -14)
README.md CHANGED
@@ -24,6 +24,7 @@ datasets:
   - hpprc/mqa-ja
   - google-research-datasets/paws-x
 base_model: pkshatech/GLuCoSE-base-ja
+license: apache-2.0
 ---
 
 # SentenceTransformer
@@ -42,14 +43,6 @@ This is a [sentence-transformers](https://www.SBERT.net) model trained. It maps
 <!-- - **Language:** Unknown -->
 <!-- - **License:** Unknown -->
 
-### Full Model Architecture
-
-```
-SentenceTransformer(
-  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: LukeModel
-  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
-)
-```
-
 
 ## Usage
 
@@ -132,8 +125,8 @@ You can finetune this model on your own dataset.
 - Tokenizers: 0.19.1
 ## Benchmarks
 
-## Zero-shot Search
-Evaluated with [MIRACL-ja](https://huggingface.co/datasets/miracl/miracl), [JQARA][https://huggingface.co/datasets/hotchpotch/JQaRA] and [MLDR-ja][https://huggingface.co/datasets/Shitao/MLDR].
+### Zero-shot Search
+Evaluated with [MIRACL-ja](https://huggingface.co/datasets/miracl/miracl), [JQaRA](https://huggingface.co/datasets/hotchpotch/JQaRA) and [MLDR-ja](https://huggingface.co/datasets/Shitao/MLDR).
 
 | model | size | MIRACL<br>Recall@5 | JQaRA<br>nDCG@10 | MLDR<br>nDCG@10 |
 |--------|--------|---------------------|-------------------|-------------------|
@@ -141,8 +134,8 @@ Evaluated with [MIRACL-ja](https://huggingface.co/datasets/miracl/miracl), [JQAR
 | GLuCoSE | 0.1B | 53.3 | 30.8 | 25.2 |
 | GLuCoSE v2 | 0.1B | 85.5 | 60.6 | 33.8 |
 
-## JMTEB
-Evaluated with [JMTEB][https://github.com/sbintuitions/JMTEB].
+### JMTEB
+Evaluated with [JMTEB](https://github.com/sbintuitions/JMTEB).
 * The time-consuming tasks ['amazon_review_classification', 'mrtydi', 'jaqket', 'esci'] were excluded; the remaining tasks were evaluated.
 * The average is a macro-average per task.
 
@@ -153,11 +146,16 @@ Evaluated with [JMTEB][https://github.com/sbintuitions/JMTEB].
 | GLuCoSE v2 | 0.1B | 80.5 | 82.8 | 83.0 | 49.8 | 62.4 | 71.7 |
 
 
-## Citation
+## Authors
+Chihiro Yano, Go Mocho, Hideyuki Tachibana, Hiroto Takegawa, Yotaro Watanabe
 
-### BibTeX
+## License
+This model is published under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
 
 <!--
+## Citation
+
+### BibTeX
 ## Glossary
 
 *Clearly define terms in order to be accessible across audiences.*
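
The architecture block removed by this commit described a LukeModel transformer followed by a `Pooling` module with `pooling_mode_mean_tokens: True`, i.e. sentence embeddings obtained by averaging the per-token vectors over real (non-padding) positions. As a rough illustration only (not the actual model code), a mask-aware mean pooling step over hypothetical transformer outputs looks like:

```python
import numpy as np

def mean_pool(token_embeddings: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Average token embeddings across the sequence, ignoring padding.

    token_embeddings: (batch, seq_len, dim) per-token vectors from the transformer.
    attention_mask:   (batch, seq_len), 1 for real tokens and 0 for padding.
    """
    mask = attention_mask[:, :, None].astype(token_embeddings.dtype)  # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(axis=1)                    # (batch, dim)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)                    # avoid divide-by-zero
    return summed / counts

# Toy batch: one sequence of length 3 whose last position is padding, dim 2.
emb = np.array([[[1.0, 2.0], [3.0, 4.0], [100.0, 100.0]]])
mask = np.array([[1, 1, 0]])
print(mean_pool(emb, mask))  # [[2. 3.]] — the padded position is excluded
```

The padding vector has no effect on the result, which is why the `attention_mask` term matters; a plain `mean(axis=1)` would skew embeddings for short, heavily padded sequences.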