# Model Card for BERT-base Sentiment Analysis Model

## Model Details

This model is a fine-tuned version of BERT-base (`bert-base-uncased`) for sentiment analysis of movie reviews.

## Training Data

The model was fine-tuned on the Rotten Tomatoes movie review dataset.
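
For reference, the dataset can be loaded with the Hugging Face `datasets` library. The sketch below assumes the public `rotten_tomatoes` dataset on the Hub, which provides train/validation/test splits with `text` and `label` fields.

```python
# Minimal sketch: load the Rotten Tomatoes dataset from the Hugging Face Hub.
# Assumes the public "rotten_tomatoes" dataset; adjust the identifier if a
# mirror or local copy is used.
from datasets import load_dataset

dataset = load_dataset("rotten_tomatoes")
print(dataset)               # splits and sizes
print(dataset["train"][0])   # one example: {"text": ..., "label": ...}
```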

## Training Procedure

- **Learning rate**: 2e-5
- **Epochs**: 3
- **Batch size**: 16

These hyperparameters are listed so that the fine-tuning run can be reproduced with the same BERT model (`bert-base-uncased`) and the Rotten Tomatoes dataset; a sketch of the corresponding training setup follows below.
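
The following is a minimal fine-tuning sketch built around the hyperparameters above, using the Hugging Face `Trainer` API. Anything not stated in this card (output directory, padding strategy, and similar defaults) is an assumption for illustration, not part of the documented configuration.

```python
# Minimal fine-tuning sketch for the hyperparameters listed above.
# Settings not stated in this card (output_dir, padding strategy, etc.)
# are assumptions for illustration only.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("rotten_tomatoes")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    # Fixed-length padding keeps the default data collator simple.
    return tokenizer(batch["text"], truncation=True, padding="max_length")

dataset = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # Rotten Tomatoes is binary (neg/pos)

args = TrainingArguments(
    output_dir="bert-rotten-tomatoes",  # assumed name; not specified in this card
    learning_rate=2e-5,
    num_train_epochs=3,
    per_device_train_batch_size=16,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()
```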

## How to Use

The model can be loaded through the Hugging Face Transformers library as shown below.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Replace "bert-base-uncased" with the identifier or local path of this
# fine-tuned checkpoint; the base model alone has a randomly initialized
# classification head.
model_id = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

input_text = "The movie was fantastic with a gripping storyline!"
inputs = tokenizer(input_text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)
print(outputs.logits)
```
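
To turn the raw logits into a readable prediction, apply a softmax and take the argmax. The label mapping below (0 = negative, 1 = positive) follows the usual Rotten Tomatoes convention and is an assumption; check the checkpoint's `config.id2label` for the authoritative mapping.

```python
import torch

# Continues the snippet above: convert logits to a readable prediction.
# The 0 = negative / 1 = positive mapping is assumed; prefer model.config.id2label
# from the actual fine-tuned checkpoint.
probs = torch.softmax(outputs.logits, dim=-1)
pred = probs.argmax(dim=-1).item()
labels = {0: "negative", 1: "positive"}
print(labels[pred], probs[0, pred].item())
```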

## Evaluation Results

- **Accuracy**: 81.97%
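
For completeness, the accuracy can be recomputed with the `evaluate` library. The checkpoint path below is hypothetical, and the `rotten_tomatoes` test split is assumed as the evaluation set, since the card does not state which split was used.

```python
# Sketch of recomputing accuracy; "./bert-rotten-tomatoes" is a hypothetical
# path to the fine-tuned checkpoint, and the test split is assumed.
import torch
import evaluate
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_path = "./bert-rotten-tomatoes"  # hypothetical; point at the real checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)
model.eval()

test_set = load_dataset("rotten_tomatoes", split="test")

preds, refs = [], []
for example in test_set:
    inputs = tokenizer(example["text"], return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    preds.append(int(logits.argmax(dim=-1)))
    refs.append(example["label"])

accuracy = evaluate.load("accuracy")
print(accuracy.compute(predictions=preds, references=refs))
```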

## Limitations

The model may produce biased or otherwise inappropriate predictions due to biases in the training data. It is recommended to use the model with caution and to apply filtering where necessary; one simple example is sketched below.
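
One simple form of filtering is to treat low-confidence predictions as abstentions rather than hard labels. The threshold value below is an arbitrary illustration, not a recommendation from this card.

```python
import torch

# Example filter: abstain when the top-class probability is below a threshold.
# The 0.9 value is an arbitrary illustration and should be tuned per use case.
def classify_with_abstention(model, tokenizer, text, threshold=0.9):
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = torch.softmax(model(**inputs).logits, dim=-1)
    confidence, pred = probs.max(dim=-1)
    if confidence.item() < threshold:
        return None  # abstain; route to a human or a fallback system
    return pred.item(), confidence.item()
```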

## Ethical Considerations

- **Bias**: The model may inherit biases present in the training data.
- **Misuse**: The model can be misused to generate misleading or harmful content.

## Copyright and License

This model is licensed under the MIT License.