Can't get the same output as the usage example for this model
I use the following code:
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("/mnt/workspace/huangqihao/数据质量提升/python打分/python-edu-scorer/")
model = AutoModelForSequenceClassification.from_pretrained("/mnt/workspace/huangqihao/数据质量提升/python打分/python-edu-scorer/")
text = "This is a test sentence."
inputs = tokenizer(text, return_tensors="pt", padding="longest", truncation=True)
outputs = model(**inputs)
logits = outputs.logits.squeeze(-1).float().detach().numpy()
score = logits.item()
result = {
    "text": text,
    "score": score,
    "int_score": int(round(max(0, min(score, 5)))),
}
print(result)
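To make sure the mismatch isn't coming from the post-processing, I also isolated the int_score line, which just clamps the raw score into [0, 5] and rounds it (a minimal sketch of that logic, with a hypothetical helper name):

```python
def int_score(score: float) -> int:
    # Clamp the raw regression score into [0, 5], then round to the nearest integer.
    return int(round(max(0, min(score, 5))))

print(int_score(1.631742000579834))    # my observed score -> 2
print(int_score(0.07964489609003067))  # the documented score -> 0
```

Both inputs map to the buckets shown below, so the discrepancy is already present in the raw score, not in the rounding.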
Its output is:
{'text': 'This is a test sentence.', 'score': 1.631742000579834, 'int_score': 2}
But according to the usage example, the output should be:
{'text': 'This is a test sentence.', 'score': 0.07964489609003067, 'int_score': 0}
I don't know why this happens. I have re-downloaded the safetensors file several times, but the problem persists.
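To rule out a corrupted download, one check is to compare the SHA-256 of the local safetensors file against the checksum shown on the model's "Files and versions" page on the Hub (a minimal sketch; the local file name is an assumption):

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks and return its hex SHA-256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the SHA256 listed for model.safetensors on the Hub:
# print(sha256_of_file("model.safetensors"))
```

If the digests match, the weights on disk are identical to the ones published, and the difference must come from something else (e.g. library versions or a different checkpoint revision).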