|
|
|
--- |
|
library_name: peft |
|
base_model: ericzzz/falcon-rw-1b-instruct-openorca |
|
license: apache-2.0 |
|
language: |
|
- en |
|
--- |
|
|
|
## Model Description |
|
|
|
A quick-and-dirty LoRA fine-tune for binary sentiment analysis of movie reviews (positive/negative).
|
|
|
Fine-tuned with LoRA (PEFT) on [ericzzz/falcon-rw-1b-instruct-openorca](https://huggingface.co/ericzzz/falcon-rw-1b-instruct-openorca).
|
|
|
Trained on a subset of the [IMDB Dataset of 50K Movie Reviews](https://www.kaggle.com/datasets/lakshmi25npathi/imdb-dataset-of-50k-movie-reviews) from Kaggle.
|
|
|
**To load the model you can use this code:**

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftConfig, PeftModel

PEFT_MODEL = "Jonny00/falcon-1b-movie-sentiment-analysis"

# Load the base model referenced by the adapter config
config = PeftConfig.from_pretrained(PEFT_MODEL)
model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    return_dict=True,
    device_map="auto",
    trust_remote_code=True,
)

tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
tokenizer.pad_token = tokenizer.eos_token

# Attach the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(model, PEFT_MODEL)
```
|
|
|
**Input**: *("\<human\>: This movie sucks, I'd rather stay at home! \<assistant\>:")* |
|
|
|
**Output**: *("... negative \<assistant\>: negative \<assistant\>: negative ...")* |
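As the example output shows, the model tends to repeat the label after the prompt, so taking the first predicted word is enough. Below is a minimal sketch of this post-processing; the `build_prompt` and `extract_sentiment` helpers are hypothetical (not part of this repo) and only illustrate the `<human>`/`<assistant>` prompt format used above.

```python
# Hypothetical helpers illustrating the prompt format and one way to
# read a single label out of the repetitive generation.

PROMPT_TEMPLATE = "<human>: {review} <assistant>:"

def build_prompt(review: str) -> str:
    """Wrap a review in the <human>/<assistant> format the model expects."""
    return PROMPT_TEMPLATE.format(review=review)

def extract_sentiment(generated: str, prompt: str) -> str:
    """Return the first 'positive'/'negative' token after the prompt.

    The model often repeats the label ('negative <assistant>: negative ...'),
    so the first occurrence is taken as the prediction.
    """
    completion = generated[len(prompt):] if generated.startswith(prompt) else generated
    for token in completion.split():
        if token in ("positive", "negative"):
            return token
    return "unknown"

prompt = build_prompt("This movie sucks, I'd rather stay at home!")
raw_output = prompt + " negative <assistant>: negative <assistant>: negative"
print(extract_sentiment(raw_output, prompt))  # → negative
```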
|
|
|
## Example Google Colab Code |
|
|
|
[Open in Google Colab](https://colab.research.google.com/drive/1LUILztSocpqpMz8xACbtmxl-W-cORXRZ?usp=sharing)
|
|
|
## Framework versions |
|
|
|
- PEFT 0.7.1 |
|
|