---
library_name: peft
base_model: ericzzz/falcon-rw-1b-instruct-openorca
license: apache-2.0
language:
- en
---
## Model Description
Quick and dirty hack for binary movie sentiment analysis.
Fine-tuned with LoRA (PEFT) on [ericzzz/falcon-rw-1b-instruct-openorca](https://huggingface.co/ericzzz/falcon-rw-1b-instruct-openorca).
Trained on a subset of the [IMDB Dataset of 50K Movie Reviews](https://www.kaggle.com/datasets/lakshmi25npathi/imdb-dataset-of-50k-movie-reviews) from Kaggle.
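
The exact preprocessing script isn't included in this card. A plausible sketch, assuming the Kaggle CSV's `review`/`sentiment` columns and the `<human>`/`<assistant>` template shown below (the subset size is a guess):

```python
# Hypothetical preprocessing sketch -- not the actual training script.
import pandas as pd

# Kaggle CSV with columns "review" and "sentiment" ("positive"/"negative")
df = pd.read_csv("IMDB Dataset.csv").sample(n=5000, random_state=42)  # subset size assumed

def to_prompt(row):
    # Wrap each review in the prompt template the model card demonstrates
    return f"<human>: {row['review']} <assistant>: {row['sentiment']}"

train_texts = df.apply(to_prompt, axis=1).tolist()
```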
**To load the model you can use this code:**

```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

PEFT_MODEL = "Jonny00/falcon-1b-movie-sentiment-analysis"

# Load the base model referenced by the adapter config
config = PeftConfig.from_pretrained(PEFT_MODEL)
model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    return_dict=True,
    device_map="auto",
    trust_remote_code=True,
)

tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
tokenizer.pad_token = tokenizer.eos_token

# Attach the LoRA adapter weights
model = PeftModel.from_pretrained(model, PEFT_MODEL)
```
**Input**: *("\<human\>: This movie sucks, I'd rather stay at home! \<assistant\>:")*
**Output**: *("... negative \<assistant\>: negative \<assistant\>: negative ...")*
## Example Google Colab Code
https://colab.research.google.com/drive/1LUILztSocpqpMz8xACbtmxl-W-cORXRZ?usp=sharing
## Framework versions
- PEFT 0.7.1