Example code to load our model locally and generate a prediction:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the fine-tuned tokenizer and model from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("Dede1989600/hatespeech_mCAD")
model = AutoModelForSeq2SeqLM.from_pretrained("Dede1989600/hatespeech_mCAD")

# Tokenize the input and generate the predicted label as text.
inputs = tokenizer("This is a test that should be labeled as no hate", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
The model returns `1` for hate speech and `0` for no hate speech.
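If you want an integer label directly, a minimal helper could wrap the steps above. This is a sketch, not part of the model card: the `classify` name is hypothetical, and it assumes the model's decoded output is exactly the string "0" or "1".

```python
# Hypothetical wrapper around the loading code above; assumes the
# decoded output is exactly "0" or "1".
def classify(text: str) -> int:
    inputs = tokenizer(text, return_tensors="pt")
    outputs = model.generate(**inputs)
    label = tokenizer.decode(outputs[0], skip_special_tokens=True).strip()
    return int(label)  # 1 = hate speech, 0 = no hate speech

print(classify("This is a test that should be labeled as no hate"))  # expected: 0
```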