# Model Card for BERT-base Sentiment Analysis Model
## Model Details
This model is a fine-tuned version of BERT-base (`bert-base-uncased`) for binary sentiment classification of English movie reviews.
## Training Data
The model was fine-tuned on the Rotten Tomatoes movie-review dataset, in which each example is a short review sentence labeled as positive or negative.
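Assuming this card refers to the public release available on the Hugging Face Hub under the `rotten_tomatoes` identifier, the data can be loaded and inspected as follows:

```python
from datasets import load_dataset

# Load the Rotten Tomatoes review dataset: each example has "text" and "label" (0 = negative, 1 = positive)
dataset = load_dataset("rotten_tomatoes")
print(dataset)              # train / validation / test splits
print(dataset["train"][0])  # e.g. {"text": "...", "label": 1}
```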
## Training Procedure
- **Learning Rate**: 2e-5
- **Epochs**: 3
- **Batch Size**: 16
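Below is a minimal sketch of how these hyperparameters might map onto the Hugging Face `Trainer` API. The exact training script is not part of this card; the `output_dir` name and `max_length` value are illustrative assumptions.

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

dataset = load_dataset("rotten_tomatoes")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    # max_length=128 is an assumption; the card does not state the sequence length used
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

tokenized = dataset.map(tokenize, batched=True)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

args = TrainingArguments(
    output_dir="bert-base-rotten-tomatoes",  # hypothetical output directory
    learning_rate=2e-5,
    num_train_epochs=3,
    per_device_train_batch_size=16,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
)
trainer.train()
```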
## How to Use
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# "bert-base-uncased" is a placeholder; replace it with the fine-tuned checkpoint path or Hub ID
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

input_text = "The movie was fantastic with a gripping storyline!"
inputs = tokenizer(input_text, return_tensors="pt")

# Forward pass without gradient tracking; logits has shape (1, num_labels)
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.logits)
```
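The raw logits can be turned into a class prediction with a softmax. The label order below (0 = negative, 1 = positive) is an assumption based on the Rotten Tomatoes dataset and may differ for a given checkpoint:

```python
import torch

probs = torch.softmax(outputs.logits, dim=-1)  # class probabilities
label = "positive" if probs.argmax(dim=-1).item() == 1 else "negative"
print(label, probs.tolist())
```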
## Evaluation
- **Accuracy**: 81.97%
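As a hedged sketch, accuracy could be recomputed on the Rotten Tomatoes test split as shown below; `bert-base-uncased` is again a placeholder for the fine-tuned checkpoint, without which the reported score will not be reproduced:

```python
import torch
import evaluate
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder model ID; substitute the fine-tuned sentiment checkpoint
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model.eval()

test = load_dataset("rotten_tomatoes", split="test")
preds, refs = [], []
for example in test:
    inputs = tokenizer(example["text"], return_tensors="pt", truncation=True)
    with torch.no_grad():
        preds.append(model(**inputs).logits.argmax(dim=-1).item())
    refs.append(example["label"])

accuracy = evaluate.load("accuracy")
print(accuracy.compute(predictions=preds, references=refs))
```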
## Limitations
The model may produce biased or otherwise unreliable predictions due to the nature of the training data.
It is recommended to use the model with caution and to apply human review or additional filtering in downstream applications.
## Ethical Considerations
- **Bias**: The model may inherit biases present in the training data.
- **Misuse**: The model's predictions can be misused to support misleading or harmful claims about the sentiment of text.
## Copyright and License
This model is licensed under the MIT License.