BrandonZYW committed on
Commit 89848d0 · verified · 1 Parent(s): 50ac8ae

Upload README.md

Files changed (1): README.md +81 -0
README.md CHANGED
@@ -1,3 +1,84 @@
---
license: mit
datasets:
- KomeijiForce/Inbedder-Pretrain-Data
language:
- en
---

# [ACL2024] Answer is All You Need: Instruction-following Text Embedding via Answering the Question

InBedder🛌 is a text embedder designed to follow instructions. An instruction-following embedder captures the characteristics of a text that the user's instruction specifies. InBedder takes the novel viewpoint of treating the instruction as a question about the input text, and it encodes the expected answers to obtain the representation. Across several evaluation tasks, we show that InBedder is aware of the instruction it is given.
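Concretely, InBedder turns each (text, instruction) pair into a cloze-style prompt: the instruction is followed by a fixed number of mask tokens, and the representation is later read off the mask positions. A minimal sketch of the prompt construction (here `build_prompt` is a hypothetical helper, not part of the released code, and the `<mask>` literal stands in for the tokenizer's actual mask token):

```python
def build_prompt(instruction: str, n_mask: int, mask_token: str = "<mask>") -> str:
    # The model "answers" the instruction by filling the appended masks;
    # the filled positions are then pooled into the embedding.
    return instruction + mask_token * n_mask

print(build_prompt("What is the animal mentioned here?", 3))
# What is the animal mentioned here?<mask><mask><mask>
```

The input text itself is passed to the tokenizer as the first segment of a sentence pair, with this prompt as the second segment.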

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64323dd503d81fa4d26deaf9/jLbqF-2uT8Aw9DsN7XCVG.png)

The following use case is taken from [https://github.com/zhang-yu-wei/InBedder/blob/main/UseCase.ipynb](https://github.com/zhang-yu-wei/InBedder/blob/main/UseCase.ipynb):
```python
import torch
from torch.nn.functional import gelu, cosine_similarity
from transformers import AutoTokenizer, AutoModelForMaskedLM


class InBedder:

    def __init__(self, path='KomeijiForce/inbedder-roberta-large', device='cuda:0'):
        model = AutoModelForMaskedLM.from_pretrained(path)

        self.tokenizer = AutoTokenizer.from_pretrained(path)
        # Reuse the RoBERTa encoder plus the first half of the MLM head
        # (dense + layer norm) as the embedding pipeline.
        self.model = model.roberta
        self.dense = model.lm_head.dense
        self.layer_norm = model.lm_head.layer_norm

        self.device = torch.device(device)
        self.model = self.model.to(self.device)
        self.dense = self.dense.to(self.device)
        self.layer_norm = self.layer_norm.to(self.device)

        self.vocab = self.tokenizer.get_vocab()
        self.vocab = {self.vocab[key]: key for key in self.vocab}

    def encode(self, input_texts, instruction, n_mask):
        # Build one prompt per input: the instruction followed by n_mask
        # mask tokens that the model fills in as its "answer".
        if isinstance(instruction, str):
            prompts = [instruction + self.tokenizer.mask_token * n_mask for _ in input_texts]
        elif isinstance(instruction, list):
            prompts = [inst + self.tokenizer.mask_token * n_mask for inst in instruction]

        # Encode each input text and its prompt as a sentence pair.
        inputs = self.tokenizer(input_texts, prompts, padding=True, truncation=True, return_tensors='pt').to(self.device)

        mask = inputs.input_ids.eq(self.tokenizer.mask_token_id)

        outputs = self.model(**inputs)

        # Gather hidden states at the mask positions and pass them through
        # the MLM head's dense + layer-norm transform.
        logits = outputs.last_hidden_state[mask]
        logits = self.layer_norm(gelu(self.dense(logits)))

        # Average over the mask positions, then standardize each embedding
        # to zero mean and unit variance across its dimensions.
        logits = logits.reshape(len(input_texts), n_mask, -1)
        logits = logits.mean(1)
        logits = (logits - logits.mean(1, keepdim=True)) / logits.std(1, keepdim=True)

        return logits


inbedder = InBedder(path='KomeijiForce/inbedder-roberta-large', device='cpu')

texts = ["I love cat!", "I love dog!", "I dislike cat!"]
instruction = "What is the animal mentioned here?"
embeddings = inbedder.encode(texts, instruction, 3)

cosine_similarity(embeddings[:1], embeddings[1:], dim=1)
# tensor([0.9374, 0.9917], grad_fn=<SumBackward1>)

texts = ["I love cat!", "I love dog!", "I dislike cat!"]
instruction = "What is emotion expressed here?"
embeddings = inbedder.encode(texts, instruction, 3)

cosine_similarity(embeddings[:1], embeddings[1:], dim=1)
# tensor([0.9859, 0.8537], grad_fn=<SumBackward1>)
```
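A side note on the final standardization step: because each embedding is shifted to zero mean and unit variance across its dimensions, the cosine similarity between two embeddings equals the Pearson correlation of the underlying answer representations. A small self-contained check of that equivalence (plain Python, no model needed; the vectors are made up for illustration):

```python
import math

def standardize(v):
    # Zero-mean, unit-variance across dimensions, mirroring
    # (logits - logits.mean()) / logits.std() in the encoder above.
    n = len(v)
    mean = sum(v) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in v) / (n - 1))  # sample std, as torch.std uses
    return [(x - mean) / std for x in v]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def pearson(a, b):
    # Pearson correlation computed directly from the raw vectors.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))

a, b = [0.2, 1.5, -0.7, 3.1], [0.1, 1.2, -0.5, 2.8]
# Cosine of the standardized vectors matches Pearson of the raw vectors.
print(abs(cosine(standardize(a), standardize(b)) - pearson(a, b)) < 1e-9)
# True
```

The scaling factor in `standardize` cancels inside the cosine, so the equality holds whether the standard deviation is computed with `n` or `n - 1` in the denominator.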