---
language:
- fr
tags:
- nsp
- next-sentence-prediction
- t5
datasets:
- wikipedia
metrics:
- accuracy
---
|
|
|
# T5-french-nsp |
|
|
|
T5-french-nsp is fine-tuned for the Next Sentence Prediction (NSP) task on the [wikipedia dataset](https://huggingface.co/datasets/wikipedia), starting from the [plguillou/t5-base-fr-sum-cnndm](https://huggingface.co/plguillou/t5-base-fr-sum-cnndm) model. It was introduced in this [paper](https://arxiv.org/abs/2307.07331) and first released on this page.
|
|
|
## Model description |
|
|
|
T5-french-nsp is a Transformer-based model that was fine-tuned for the Next Sentence Prediction task on 14,000 French Wikipedia articles.
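
As background, NSP training pairs are conventionally built by taking two consecutive sentences from an article as a positive example and pairing a sentence with a randomly sampled sentence as a negative one. The sketch below only illustrates that idea; it is not the exact data pipeline from the paper, and the helper name is ours:

```python
import random

def make_nsp_pairs(articles, seed=0):
    """Illustrative NSP pair construction (hypothetical helper, not the paper's pipeline).

    articles: list of articles, each a list of sentences.
    Returns (sentence_a, sentence_b, label) triples, using the BERT-style
    convention: label 0 for a true continuation, 1 for a random sentence.
    """
    rng = random.Random(seed)
    pairs = []
    for sentences in articles:
        for sent_a, sent_b in zip(sentences, sentences[1:]):
            pairs.append((sent_a, sent_b, 0))  # positive: consecutive sentences
            random_article = rng.choice(articles)  # negative: random sentence
            pairs.append((sent_a, rng.choice(random_article), 1))
    return pairs
```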
|
|
|
## Intended uses |
|
|
|
- Perform Next Sentence Prediction (and compare the results with BERT models, since BERT natively supports this task; a comparison sketch follows this list)

- See how to fine-tune a T5 model for NSP using our [code](https://github.com/slds-lmu/stereotypes-multi/tree/main)

- Check our [paper](https://arxiv.org/abs/2307.07331) for the evaluation results
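
For the BERT comparison mentioned above, here is a minimal sketch using `BertForNextSentencePrediction`. The checkpoint `bert-base-multilingual-cased` is our assumption (any BERT checkpoint with a pretrained NSP head would do); for BERT, class 0 means the second sentence follows the first:

```python
import torch
from transformers import BertForNextSentencePrediction, BertTokenizer

# Assumed baseline checkpoint; substitute any BERT checkpoint with an NSP head.
bert_tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
bert_model = BertForNextSentencePrediction.from_pretrained("bert-base-multilingual-cased").eval()

# "In Italy, pizza is served unsliced." / "However, it is served in slices in Turkey."
inputs = bert_tokenizer("En Italie, la pizza est présentée non tranchée.",
                        "Cependant, il est servi en tranches en Turquie.", return_tensors="pt")
with torch.no_grad():
    logits = bert_model(**inputs).logits
print(torch.argmax(logits, dim=-1))  # 0 = the second sentence follows the first
```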
|
|
|
## How to use |
|
|
|
You can use this model for next sentence prediction with the wrapper class defined below. Here is how to use it in PyTorch:
|
|
|
### Necessary Initialization |
|
```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer
from huggingface_hub import hf_hub_download


class ModelNSP(torch.nn.Module):
    def __init__(self, pretrained_model, tokenizer, nsp_dim=300):
        super(ModelNSP, self).__init__()
        # Token ids that the tokenizer assigns to the class labels "0" and "1".
        self.zero_token, self.one_token = (self.find_label_encoding(x, tokenizer).item() for x in ["0", "1"])
        self.core_model = T5ForConditionalGeneration.from_pretrained(pretrained_model)
        # Classification head from fine-tuning; kept so the checkpoint keys match.
        self.nsp_head = torch.nn.Sequential(torch.nn.Linear(self.core_model.config.hidden_size, nsp_dim),
                                            torch.nn.Linear(nsp_dim, nsp_dim), torch.nn.Linear(nsp_dim, 2))

    def forward(self, input_ids, attention_mask=None):
        outputs = self.core_model.generate(input_ids=input_ids, attention_mask=attention_mask, max_length=3,
                                           output_scores=True, return_dict_in_generate=True)
        # Read off the scores of the "0" and "1" tokens at the second generation step.
        logits = [torch.Tensor([score[self.zero_token], score[self.one_token]]) for score in outputs.scores[1]]
        return torch.stack(logits).softmax(dim=-1)

    @staticmethod
    def find_label_encoding(input_str, tokenizer):
        encoded_str = tokenizer.encode(input_str, add_special_tokens=False, return_tensors="pt")
        # SentencePiece may split the label into two pieces; keep only the digit token.
        return (torch.index_select(encoded_str, 1, torch.tensor([1])) if encoded_str.size(dim=1) == 2 else encoded_str)


tokenizer = T5Tokenizer.from_pretrained("tolga-ozturk/t5-french-nsp")
model = torch.nn.DataParallel(ModelNSP("plguillou/t5-base-fr-sum-cnndm", tokenizer).eval())
model.load_state_dict(torch.load(hf_hub_download(repo_id="tolga-ozturk/t5-french-nsp", filename="model_weights.bin")))
```
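
Note that the weights are loaded into a `torch.nn.DataParallel` wrapper: the released checkpoint was presumably saved from a `DataParallel` module, so its keys carry a `module.` prefix that only matches when the wrapper is present. If you would rather work with a bare `ModelNSP` (our assumption, not something the released code requires), you can strip the prefix yourself:

```python
# Hypothetical alternative: load the checkpoint without DataParallel by
# removing the "module." prefix from every state-dict key (Python 3.9+).
state_dict = torch.load(hf_hub_download(repo_id="tolga-ozturk/t5-french-nsp", filename="model_weights.bin"))
bare_model = ModelNSP("plguillou/t5-base-fr-sum-cnndm", tokenizer).eval()
bare_model.load_state_dict({key.removeprefix("module."): value for key, value in state_dict.items()})
```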
|
|
|
### Inference |
|
```python
# Each pair is (premise with the "classification binaire:" task prefix, candidate next sentence).
# Pair 1: "In Italy, pizza is served unsliced." / "The sky is blue."
# Pair 2: "In Italy, pizza is served unsliced." / "However, it is served in slices in Turkey."
batch_texts = [("classification binaire: En Italie, la pizza est présentée non tranchée.", "Le ciel est bleu."),
               ("classification binaire: En Italie, la pizza est présentée non tranchée.", "Cependant, il est servi en tranches en Turquie.")]
encoded_dict = tokenizer.batch_encode_plus(batch_text_or_text_pairs=batch_texts, truncation="longest_first",
                                           padding=True, return_tensors="pt", return_attention_mask=True, max_length=256)
print(torch.argmax(model(encoded_dict.input_ids, attention_mask=encoded_dict.attention_mask), dim=-1))
```
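
Assuming the BERT-style label convention (class 0 = the second sentence follows the first), the batch above would be expected to yield `tensor([1, 0])`: the sky sentence is unrelated, while the Turkey sentence is a plausible continuation. For repeated use, encoding and prediction can be wrapped in a small helper; the function below is our own convenience sketch, not part of the released code:

```python
def predict_nsp(sentence_a, sentence_b):
    """Hypothetical convenience wrapper around the model and tokenizer defined above."""
    pair = [("classification binaire: " + sentence_a, sentence_b)]
    encoded = tokenizer.batch_encode_plus(batch_text_or_text_pairs=pair, truncation="longest_first",
                                          padding=True, return_tensors="pt", return_attention_mask=True,
                                          max_length=256)
    with torch.no_grad():
        probs = model(encoded.input_ids, attention_mask=encoded.attention_mask)
    return torch.argmax(probs, dim=-1).item()  # 0 or 1, per the training labels

predict_nsp("En Italie, la pizza est présentée non tranchée.",
            "Cependant, il est servi en tranches en Turquie.")
```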
|
|
|
## Training Metrics

<img src="https://huggingface.co/tolga-ozturk/t5-french-nsp/resolve/main/metrics.png" alt="Training metrics plot">
|
|
|
## BibTeX entry and citation info |
|
|
|
```bibtex
@misc{ozturk2023different,
    title={How Different Is Stereotypical Bias Across Languages?},
    author={Ibrahim Tolga Öztürk and Rostislav Nedelchev and Christian Heumann and Esteban Garces Arias and Marius Roger and Bernd Bischl and Matthias Aßenmacher},
    year={2023},
    eprint={2307.07331},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
|
|
|
This work was done with the Ludwig-Maximilians-Universität Statistics group; don't forget to check out [their Hugging Face page](https://huggingface.co/misoda) for other interesting works!