---
language:
- fr
tags:
- nsp
- next-sentence-prediction
- t5
datasets:
- wikipedia
metrics:
- accuracy
---

# T5-french-nsp

T5-french-nsp is fine-tuned for the Next Sentence Prediction task on the [wikipedia dataset](https://huggingface.co/datasets/wikipedia), starting from the [plguillou/t5-base-fr-sum-cnndm](https://huggingface.co/plguillou/t5-base-fr-sum-cnndm) model. It was introduced in this [paper](https://arxiv.org/abs/2307.07331) and first released on this page.

## Model description

T5-french-nsp is a Transformer-based model that was fine-tuned for the Next Sentence Prediction task on 14,000 French Wikipedia articles.

## Intended uses

- Perform Next Sentence Prediction (and compare the results with BERT models, since BERT supports this task natively; see the sketch after this list)
- See how to fine-tune a T5 model using our [code](https://github.com/slds-lmu/stereotypes-multi/tree/main)
- Check our [paper](https://arxiv.org/abs/2307.07331) to see its results
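
For the comparison mentioned in the first point, here is a minimal sketch of native NSP with BERT. The choice of `bert-base-multilingual-cased` is an assumption for illustration (any BERT checkpoint with a pretrained NSP head that covers French would do); it is not part of the original model card:

```python
import torch
from transformers import BertForNextSentencePrediction, BertTokenizer

# Assumed checkpoint: multilingual BERT ships a pretrained NSP head.
bert_tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
bert_model = BertForNextSentencePrediction.from_pretrained("bert-base-multilingual-cased").eval()

# Same sentence pair as in the inference example further below.
inputs = bert_tokenizer("En Italie, la pizza est présentée non tranchée.",
                        "Cependant, il est servi en tranches en Turquie.", return_tensors="pt")
with torch.no_grad():
    logits = bert_model(**inputs).logits  # index 0 = "is a continuation", index 1 = "is not"
print(logits.softmax(dim=-1))
```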

## How to use

You can use this model directly for next sentence prediction. Here is how to use it in PyTorch:

### Necessary Initialization
```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer
from huggingface_hub import hf_hub_download

class ModelNSP(torch.nn.Module):
    def __init__(self, pretrained_model, tokenizer, nsp_dim=300):
        super().__init__()
        # Token ids the model generates for the two class labels "0" and "1".
        self.zero_token, self.one_token = (self.find_label_encoding(x, tokenizer).item() for x in ["0", "1"])
        self.core_model = T5ForConditionalGeneration.from_pretrained(pretrained_model)
        # Classification head from fine-tuning; kept so the checkpoint's state dict
        # loads without missing keys.
        self.nsp_head = torch.nn.Sequential(torch.nn.Linear(self.core_model.config.hidden_size, nsp_dim),
                                            torch.nn.Linear(nsp_dim, nsp_dim), torch.nn.Linear(nsp_dim, 2))

    def forward(self, input_ids, attention_mask=None):
        outputs = self.core_model.generate(input_ids=input_ids, attention_mask=attention_mask, max_length=3,
                                           output_scores=True, return_dict_in_generate=True)
        # Read the logits of the label tokens "0" and "1" at the second generation step.
        logits = [torch.Tensor([score[self.zero_token], score[self.one_token]]) for score in outputs.scores[1]]
        return torch.stack(logits).softmax(dim=-1)

    @staticmethod
    def find_label_encoding(input_str, tokenizer):
        encoded_str = tokenizer.encode(input_str, add_special_tokens=False, return_tensors="pt")
        # If the label encodes to two pieces, keep only the second one (the digit itself).
        return torch.index_select(encoded_str, 1, torch.tensor([1])) if encoded_str.size(dim=1) == 2 else encoded_str

tokenizer = T5Tokenizer.from_pretrained("tolga-ozturk/t5-french-nsp")
# Wrapping in DataParallel makes the parameter names match the checkpoint when loading.
model = torch.nn.DataParallel(ModelNSP("plguillou/t5-base-fr-sum-cnndm", tokenizer).eval())
model.load_state_dict(torch.load(hf_hub_download(repo_id="tolga-ozturk/t5-french-nsp", filename="model_weights.bin")))
```
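
If you prefer to avoid the `DataParallel` wrapper (e.g., on a CPU-only machine), a minimal sketch is to strip the `module.` key prefix yourself. That the checkpoint keys carry this prefix is an assumption implied by the wrapper above, not something stated in the original card:

```python
# Load the same weights into a bare ModelNSP by removing the DataParallel key prefix.
state_dict = torch.load(hf_hub_download(repo_id="tolga-ozturk/t5-french-nsp", filename="model_weights.bin"),
                        map_location="cpu")
state_dict = {key.removeprefix("module."): value for key, value in state_dict.items()}
bare_model = ModelNSP("plguillou/t5-base-fr-sum-cnndm", tokenizer).eval()
bare_model.load_state_dict(state_dict)
```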

### Inference
```python
# Each pair is (task prefix + first sentence, candidate next sentence).
# "classification binaire" means "binary classification"; the prompt sentence reads
# "In Italy, pizza is served unsliced." The first candidate ("The sky is blue.") does
# not follow it, while the second ("However, it is served in slices in Turkey.") does.
batch_texts = [("classification binaire: En Italie, la pizza est présentée non tranchée.", "Le ciel est bleu."),
               ("classification binaire: En Italie, la pizza est présentée non tranchée.", "Cependant, il est servi en tranches en Turquie.")]
encoded_dict = tokenizer.batch_encode_plus(batch_text_or_text_pairs=batch_texts, truncation="longest_first",
                                           padding=True, return_tensors="pt", return_attention_mask=True, max_length=256)
print(torch.argmax(model(encoded_dict.input_ids, attention_mask=encoded_dict.attention_mask), dim=-1))
```
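
The model returns, for each pair, a softmax over the two label tokens, and `torch.argmax` picks the predicted class. A small sketch for printing readable predictions follows; note that mapping class 0 to "next sentence" (as in BERT's NSP convention) is an assumption here, so confirm it against the paper or training code:

```python
with torch.no_grad():
    probs = model(encoded_dict.input_ids, attention_mask=encoded_dict.attention_mask)
for (_, candidate), pair_probs in zip(batch_texts, probs):
    # Assumed convention: class 0 = "is the next sentence", class 1 = "is not".
    label = "next sentence" if pair_probs.argmax().item() == 0 else "not next sentence"
    print(f"{candidate!r} -> {label} (p={pair_probs.max().item():.2f})")
```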

### Training Metrics
<img src="https://huggingface.co/tolga-ozturk/t5-french-nsp/resolve/main/metrics.png">
72 |
+
|
73 |
+
## BibTeX entry and citation info
|
74 |
+
|
75 |
+
```bibtex
|
76 |
+
@misc{title={How Different Is Stereotypical Bias Across Languages?},
|
77 |
+
author={Ibrahim Tolga Öztürk and Rostislav Nedelchev and Christian Heumann and Esteban Garces Arias and Marius Roger and Bernd Bischl and Matthias Aßenmacher},
|
78 |
+
year={2023},
|
79 |
+
eprint={2307.07331},
|
80 |
+
archivePrefix={arXiv},
|
81 |
+
primaryClass={cs.CL}
|
82 |
+
}
|
83 |
+
```
|
84 |
+
|
85 |
+
The work is done with Ludwig-Maximilians-Universität Statistics group, don't forget to check out [their huggingface page](https://huggingface.co/misoda) for other interesting works!
|