|
--- |
|
license: apache-2.0 |
|
datasets: |
|
- ruanchaves/faquad-nli |
|
language: |
|
- pt |
|
metrics: |
|
- accuracy |
|
library_name: transformers |
|
pipeline_tag: text-classification |
|
tags: |
|
- textual-entailment |
|
widget:
  - text: "Qual a capital do Brasil?<s>A capital do Brasil é Brasília!</s>"
    example_title: Example 1
  - text: "Qual a capital do Brasil?<s>Anões são muito mais legais do que elfos!</s>"
    example_title: Example 2
|
--- |
|
# TeenyTinyLlama-162m-FAQUAD |
|
|
|
TeenyTinyLlama is a series of small foundation models trained on Portuguese text.
|
|
|
This repository contains a version of [TeenyTinyLlama-162m](https://huggingface.co/nicholasKluge/TeenyTinyLlama-162m) fine-tuned on the [FaQuAD-NLI dataset](https://huggingface.co/datasets/ruanchaves/faquad-nli), which frames question answering as a classification task: given a question and a candidate answer, the model predicts whether the answer is suitable (`SUITABLE`) or not (`UNSUITABLE`).
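
For quick inference, the fine-tuned classifier can be called through the Transformers `text-classification` pipeline. The sketch below assumes this repository's Hub ID (shown here as a placeholder) and joins question and answer with the model's special tokens, mirroring the widget examples above:

```python
from transformers import pipeline

# Placeholder: substitute this repository's actual Hub model ID.
model_id = "nicholasKluge/TeenyTinyLlama-162m-FAQUAD"

classifier = pipeline("text-classification", model=model_id)

# Question and answer are joined with <s> ... </s>, matching the training format.
print(classifier("Qual a capital do Brasil?<s>A capital do Brasil é Brasília!</s>"))
# -> e.g. [{'label': 'SUITABLE', 'score': ...}]
```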
|
|
|
## Reproducing |
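
The script below fine-tunes [TeenyTinyLlama-162m](https://huggingface.co/nicholasKluge/TeenyTinyLlama-162m) on FaQuAD-NLI with the Hugging Face `Trainer` and pushes the resulting checkpoints to the Hub: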
|
|
|
```python
# Fine-tune TeenyTinyLlama-162m on FaQuAD-NLI
!pip install transformers datasets evaluate accelerate -q

import evaluate
import numpy as np
from datasets import load_dataset, Dataset, DatasetDict
from transformers import AutoTokenizer, DataCollatorWithPadding
from transformers import AutoModelForSequenceClassification, TrainingArguments, Trainer

# Load the task
dataset = load_dataset("ruanchaves/faquad-nli")

# Create a `ModelForSequenceClassification`
model = AutoModelForSequenceClassification.from_pretrained(
    "nicholasKluge/TeenyTinyLlama-162m",
    num_labels=2,
    id2label={0: "UNSUITABLE", 1: "SUITABLE"},
    label2id={"UNSUITABLE": 0, "SUITABLE": 1},
)

tokenizer = AutoTokenizer.from_pretrained("nicholasKluge/TeenyTinyLlama-162m")

# Format the dataset: join each question and answer with the tokenizer's
# special tokens (question + <s> + answer + </s>)
train = dataset["train"].to_pandas()
train["text"] = train["question"] + tokenizer.bos_token + train["answer"] + tokenizer.eos_token
train = train[["text", "label"]]
train["label"] = train["label"].astype(int)
train = Dataset.from_pandas(train)

test = dataset["test"].to_pandas()
test["text"] = test["question"] + tokenizer.bos_token + test["answer"] + tokenizer.eos_token
test = test[["text", "label"]]
test["label"] = test["label"].astype(int)
test = Dataset.from_pandas(test)

dataset = DatasetDict({
    "train": train,
    "test": test,
})

# Preprocess the dataset
def preprocess_function(examples):
    return tokenizer(examples["text"], truncation=True)

dataset_tokenized = dataset.map(preprocess_function, batched=True)

# Create a simple data collator (pads each batch dynamically)
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)

# Use accuracy as the evaluation metric
accuracy = evaluate.load("accuracy")

# Function to compute accuracy
def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = np.argmax(predictions, axis=1)
    return accuracy.compute(predictions=predictions, references=labels)

# Define training arguments
training_args = TrainingArguments(
    output_dir="checkpoints",
    learning_rate=4e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    weight_decay=0.01,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    push_to_hub=True,
    hub_token="your_token_here",  # replace with a valid Hub write token
    hub_model_id="username/model-ID",
)

# Define the Trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dataset_tokenized["train"],
    eval_dataset=dataset_tokenized["test"],
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
)

# Train!
trainer.train()
```
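
After training, `load_best_model_at_end=True` reloads the checkpoint with the lowest evaluation loss; a final call to `trainer.evaluate()` then reports accuracy on the test split, the metric shown in the table below. A minimal sketch:

```python
# Evaluate the reloaded best checkpoint on the test split
results = trainer.evaluate()
print(f"Test accuracy: {results['eval_accuracy']:.4f}")
```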
|
|
|
## Results |
|
|
|
| Models                                                                                       | [FaQuAD-NLI](https://huggingface.co/datasets/ruanchaves/faquad-nli) (accuracy %) |
|----------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------|
| [TeenyTinyLlama-162m](https://huggingface.co/nicholasKluge/TeenyTinyLlama-162m)            | 90.00                                                                               |
| [Bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) | 93.07                                                                               |
| [Gpt2-small-portuguese](https://huggingface.co/pierreguillou/gpt2-small-portuguese)        | 86.46                                                                               |
|
|
|
|