elderberry17 committed
Commit 2fe0231 · verified · 1 Parent(s): eb0daa8

Upload folder using huggingface_hub
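The commit message indicates the files were pushed with `huggingface_hub`'s folder-upload API. A minimal sketch of that call, assuming the standard `HfApi.upload_folder` method; the local folder path and repo id below are hypothetical placeholders, since the commit page does not name the repository:

```python
# Sketch: push a local model folder to the Hub as a single commit.
# The folder path and repo id are hypothetical placeholders.
from huggingface_hub import HfApi

api = HfApi()
api.upload_folder(
    folder_path="./finetuned-rubert-tiny",         # hypothetical local folder
    repo_id="elderberry17/finetuned-rubert-tiny",  # hypothetical repo id
    commit_message="Upload folder using huggingface_hub",
)
```

Every file in the folder that differs from the remote lands in one commit, which is why the pooling config and the README changed together here.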

Files changed (2):
  1. 1_Pooling/config.json +1 -1
  2. README.md +4 -4
1_Pooling/config.json CHANGED
@@ -1,5 +1,5 @@
  {
- "word_embedding_dimension": 128,
+ "word_embedding_dimension": 312,
  "pooling_mode_cls_token": true,
  "pooling_mode_mean_tokens": false,
  "pooling_mode_max_tokens": false,
README.md CHANGED
@@ -10,7 +10,7 @@ library_name: sentence-transformers

  # SentenceTransformer based on cointegrated/rubert-tiny

- This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [cointegrated/rubert-tiny](https://huggingface.co/cointegrated/rubert-tiny). It maps sentences & paragraphs to a 128-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [cointegrated/rubert-tiny](https://huggingface.co/cointegrated/rubert-tiny). It maps sentences & paragraphs to a 312-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

  ## Model Details

@@ -18,7 +18,7 @@ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [c
  - **Model Type:** Sentence Transformer
  - **Base model:** [cointegrated/rubert-tiny](https://huggingface.co/cointegrated/rubert-tiny) <!-- at revision 5441c5ea8026d4f6d7505ec004845409f1259fb1 -->
  - **Maximum Sequence Length:** 256 tokens
- - **Output Dimensionality:** 128 tokens
+ - **Output Dimensionality:** 312 tokens
  - **Similarity Function:** Cosine Similarity
  <!-- - **Training Dataset:** Unknown -->
  <!-- - **Language:** Unknown -->
@@ -35,7 +35,7 @@ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [c
  ```
  SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
- (1): Pooling({'word_embedding_dimension': 128, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+ (1): Pooling({'word_embedding_dimension': 312, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  )
  ```

@@ -63,7 +63,7 @@ sentences = [
  ]
  embeddings = model.encode(sentences)
  print(embeddings.shape)
- # [3, 128]
+ # [3, 312]

  # Get the similarity scores for the embeddings
  similarities = model.similarity(embeddings, embeddings)
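After this change, the card's advertised shape can be confirmed end to end. A quick sketch, with a hypothetical repo id standing in for this model (the commit page does not name it):

```python
# Sketch: confirm the dimensionality advertised by the updated card.
from sentence_transformers import SentenceTransformer

# Hypothetical repo id; substitute the actual repository name.
model = SentenceTransformer("elderberry17/finetuned-rubert-tiny")

# Russian example sentences, matching the Russian base model.
embeddings = model.encode(["Привет, мир!", "Как дела?"])
print(embeddings.shape)                          # expected: (2, 312)
print(model.get_sentence_embedding_dimension())  # expected: 312
```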