# Model Card for Wexnflex/Tweet_Sentiment

This model is GPT-2 fine-tuned on the "Tweet Sentiment Extraction" dataset for sentiment analysis tasks.

## Model Details

### Model Description

This model is GPT-2 fine-tuned on the "Tweet Sentiment Extraction" dataset to extract sentiment-relevant portions of text. The training workflow demonstrates preprocessing, tokenization, and fine-tuning with Hugging Face libraries.
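
As a rough illustration of the preprocessing and tokenization steps, the sketch below loads the dataset and tokenizes the tweet text with the GPT-2 tokenizer. The dataset identifier `mteb/tweet_sentiment_extraction`, the `text` column name, and the sequence length are assumptions for illustration, not details confirmed by this card.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Assumed dataset identifier and column name; adjust to the actual data source.
dataset = load_dataset("mteb/tweet_sentiment_extraction")

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

def tokenize(batch):
    # Truncate/pad tweets to a fixed length so they can be batched
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

tokenized_dataset = dataset.map(tokenize, batched=True)
```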

## Uses

### Direct Use

After fine-tuning, the model can be used directly to extract sentiment-relevant portions of text. It serves as a baseline for learning sentiment-specific features.

### Downstream Use

The model can be adapted for downstream tasks that involve sentiment analysis, such as social media monitoring or customer feedback analysis.
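
For example, a batch of social-media posts or feedback messages could be scored in a loop using the same generation interface shown in the getting-started section below. The helper name and generation settings here are illustrative assumptions, not part of the released training code.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Wexnflex/Tweet_Sentiment")
tokenizer = AutoTokenizer.from_pretrained("Wexnflex/Tweet_Sentiment")

def extract_sentiment_text(posts):
    """Illustrative helper: run generation over a list of posts."""
    results = []
    for post in posts:
        inputs = tokenizer(post, return_tensors="pt")
        outputs = model.generate(**inputs, max_new_tokens=32)
        results.append(tokenizer.decode(outputs[0], skip_special_tokens=True))
    return results

print(extract_sentiment_text(["Great service, will order again!", "The package arrived broken."]))
```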

### Out-of-Scope Use

Avoid using the model for real-time sentiment prediction or deployment without additional training/testing for specific use cases.

## Bias, Risks, and Limitations

The dataset used may not fully represent the diversity of text, leading to biases in the output. There is a risk of overfitting to the specific dataset.

### Recommendations

Carefully evaluate the model for biases and limitations before deploying in production environments. Consider retraining on a more diverse dataset if required.

## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and tokenizer by repository ID
model = AutoModelForCausalLM.from_pretrained("Wexnflex/Tweet_Sentiment")
tokenizer = AutoTokenizer.from_pretrained("Wexnflex/Tweet_Sentiment")

text = "Input your text here."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```



## Training Details

#### Training Hyperparameters

- Batch size: 16
- Learning rate: 2e-5
- Epochs: 3
- Optimizer: AdamW
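
A minimal sketch of how these hyperparameters could be passed to the Hugging Face `Trainer`; the output directory and the choice of `adamw_torch` as the optimizer implementation are assumptions, and only the values listed above come from this card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gpt2-tweet-sentiment",   # hypothetical output path
    per_device_train_batch_size=16,      # batch size: 16
    learning_rate=2e-5,                  # learning rate: 2e-5
    num_train_epochs=3,                  # epochs: 3
    optim="adamw_torch",                 # AdamW optimizer
)
```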

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

The evaluation was performed on the test split of the "Tweet Sentiment Extraction" dataset.


#### Factors

Evaluation is segmented by sentiment label (positive, negative, neutral).


#### Metrics

Accuracy 
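
A minimal sketch of how accuracy could be computed during evaluation with the `evaluate` library; it assumes the model's outputs can be reduced to one sentiment prediction per example (e.g., via a classification head or label mapping), which is not spelled out in this card.

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # eval_pred is a (logits, labels) pair supplied by the Trainer
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```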

### Results

The fine-tuned model achieves approximately 70% accuracy on the test split.

#### Summary

The fine-tuned model performs well for extracting sentiment-relevant text, with room for improvement in handling ambiguous cases.


## Technical Specifications


#### Hardware

T4 GPU (Google Colab)

#### Software

Hugging Face Transformers library
#### Model Size

124M parameters, F32 tensors (safetensors format)