---
language: ja
license: mit
widget:
- text: "自然言語処理が面白い"
metrics:
- accuracy
- f1
---
# bert-japanese_finetuned-sentiment-analysis
This model is a fine-tuned version of jarvisx17/japanese-sentiment-analysis, trained on the Japanese Sentiment Polarity Dictionary dataset.
## Pre-trained model
jarvisx17/japanese-sentiment-analysis
Link: https://huggingface.co/jarvisx17/japanese-sentiment-analysis
## Training Data
The model was trained on the Japanese Sentiment Polarity Dictionary dataset.
Link: https://www.cl.ecei.tohoku.ac.jp/Open_Resources-Japanese_Sentiment_Polarity_Dictionary.html
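The dictionary is a lexicon of Japanese expressions annotated with sentiment polarity rather than a corpus of labeled sentences. A minimal sketch for turning such a lexicon into (text, label) pairs is shown below; the file name, tab-separated layout, and polarity codes are assumptions for illustration, so check the distribution's documentation for the actual format:
```python
import csv

def load_polarity_lexicon(path):
    """Read a tab-separated lexicon into (expression, label) pairs."""
    pairs = []
    with open(path, encoding="utf-8") as f:
        for row in csv.reader(f, delimiter="\t"):
            if len(row) < 2:
                continue
            expression, polarity = row[0], row[1]
            # Hypothetical polarity codes: "p" = positive -> 1, "n" = negative -> 0
            if polarity == "p":
                pairs.append((expression, 1))
            elif polarity == "n":
                pairs.append((expression, 0))
    return pairs

# Example (placeholder file name):
# pairs = load_polarity_lexicon("polarity_lexicon.tsv")
```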
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
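For reference, these settings map onto a Transformers `TrainingArguments` configuration roughly as follows. This is a minimal sketch, not the original training script; `output_dir` is a placeholder, and the optimizer fields restate the Adam settings listed above:
```python
from transformers import TrainingArguments

# Hyperparameters from the list above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="bert-japanese_finetuned-sentiment-analysis",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```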
## Usage
You can use this model with the Transformers Python API:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the tokenizer and fine-tuned model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("minutillamolinara/bert-japanese_finetuned-sentiment-analysis")
model = AutoModelForSequenceClassification.from_pretrained("minutillamolinara/bert-japanese_finetuned-sentiment-analysis")

# Tokenize the input ("Natural language processing is interesting") and classify it
inputs = tokenizer("自然言語処理が面白い", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(model.config.id2label[outputs.logits.argmax(dim=-1).item()])
```
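Alternatively, the high-level `pipeline` API handles tokenization and label mapping in one call:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="minutillamolinara/bert-japanese_finetuned-sentiment-analysis")
print(classifier("自然言語処理が面白い"))  # [{'label': ..., 'score': ...}]
```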
### Dependencies
The tokenizer relies on MeCab for Japanese word segmentation, so the following packages are required in addition to `transformers` and `torch`:
- `pip install fugashi`
- `pip install unidic_lite`
## License
MIT