Jonny00 committed
Commit 67d525b · 1 Parent(s): f727f9e

Update README.md

Files changed (1)
  1. README.md +23 -12
README.md CHANGED
@@ -1,3 +1,4 @@
 ---
 library_name: peft
 base_model: ericzzz/falcon-rw-1b-instruct-openorca
@@ -6,28 +7,38 @@ language:
 - en
 ---
 
- # Model Info
 
 Quick and dirty hack for binary movie sentiment analysis.
 
- Finetuned with LoRA on ericzzz/falcon-rw-1b-instruct-openorca.
-
- https://huggingface.co/datasets/open-llm-leaderboard/details_ericzzz__falcon-rw-1b-instruct-openorca
 
- Trained on a subset of "IMDB Dataset of 50K Movie Reviews" from Kaggle:
 
- https://www.kaggle.com/datasets/lakshmi25npathi/imdb-dataset-of-50k-movie-reviews
 
- ### Input / Output
 
- Input: String of a movie review (e.g. "\<human\>: This movie sucks, I'd rather stay at home! \<assistant\>:")
 
- Output: String containing the sentiment (e.g. "... negative \<assistant\>: negative \<assistant\>: negative ...")
 
- ### Example Google Colab Code
 
 https://colab.research.google.com/drive/1LUILztSocpqpMz8xACbtmxl-W-cORXRZ?usp=sharing
 
- ### Framework versions
 
- - PEFT 0.7.1
 
+
 ---
 library_name: peft
 base_model: ericzzz/falcon-rw-1b-instruct-openorca
 - en
 ---
 
+ ## Model Description
 
 Quick and dirty hack for binary movie sentiment analysis.
 
+ Fine-tuned with LoRA (PEFT) on ericzzz/falcon-rw-1b-instruct-openorca ([leaderboard details](https://huggingface.co/datasets/open-llm-leaderboard/details_ericzzz__falcon-rw-1b-instruct-openorca)).
 
+ Trained on a subset of the [IMDB Dataset of 50K Movie Reviews](https://www.kaggle.com/datasets/lakshmi25npathi/imdb-dataset-of-50k-movie-reviews) from Kaggle.
 
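The training script itself is not included here, so the following is only a minimal sketch of how a LoRA adapter for this base model might be configured with PEFT before fine-tuning on the IMDB subset; the rank, alpha, dropout, and target modules below are illustrative assumptions, not the hyperparameters used for this checkpoint.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Hypothetical LoRA settings; not the documented values for this adapter.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # typical attention projection module in Falcon-RW models
    task_type="CAUSAL_LM",
)

# Attach trainable low-rank adapters to the frozen base model.
base = AutoModelForCausalLM.from_pretrained(
    "ericzzz/falcon-rw-1b-instruct-openorca",
    trust_remote_code=True,
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # shows how few parameters LoRA actually trains
```
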
+ **To load the model, you can use this code:**
 
+ # Imports needed for the snippet below
+ from peft import PeftConfig, PeftModel
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ PEFT_MODEL = "Jonny00/falcon-1b-movie-sentiment-analysis"
+
+ # Load the base model referenced by the adapter config
+ config = PeftConfig.from_pretrained(PEFT_MODEL)
+ model = AutoModelForCausalLM.from_pretrained(
+     config.base_model_name_or_path,
+     return_dict=True,
+     device_map="auto",
+     trust_remote_code=True)
+
+ tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
+ tokenizer.pad_token = tokenizer.eos_token
+
+ # Attach the LoRA adapter weights to the base model
+ model = PeftModel.from_pretrained(model, PEFT_MODEL)
 
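For reference, here is a minimal inference sketch using the `model` and `tokenizer` loaded above and the `<human>: ... <assistant>:` prompt format described in the Input/Output example below; the generation settings are illustrative assumptions rather than values documented in this repository.

```python
import torch

review = "This movie sucks, I'd rather stay at home!"
prompt = f"<human>: {review} <assistant>:"  # prompt format from the Input example below

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=10,                  # the sentiment label is a single word
        pad_token_id=tokenizer.eos_token_id,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
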
+ **Input**: a movie review formatted as a prompt string, e.g. *"\<human\>: This movie sucks, I'd rather stay at home! \<assistant\>:"*
 
+ **Output**: a string containing the predicted sentiment, e.g. *"... negative \<assistant\>: negative \<assistant\>: negative ..."*
 
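Because the raw completion tends to repeat the label, as in the example output above, a small post-processing helper can reduce it to a single binary label. The function below is a sketch; its name and first-match heuristic are choices made for illustration.

```python
def extract_sentiment(generated_text: str) -> str:
    """Reduce a noisy completion such as '... negative <assistant>: negative ...' to one label."""
    # Only look at text after the first '<assistant>:' marker, i.e. past the prompt.
    completion = generated_text.split("<assistant>:", 1)[-1].lower()
    pos = completion.find("positive")
    neg = completion.find("negative")
    if pos == -1 and neg == -1:
        return "unknown"
    if neg == -1 or (0 <= pos < neg):
        return "positive"
    return "negative"


# Reuses output_ids and tokenizer from the inference sketch above.
label = extract_sentiment(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

For the example review above this yields `"negative"`.
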
+ ## Example Google Colab Code
 
 https://colab.research.google.com/drive/1LUILztSocpqpMz8xACbtmxl-W-cORXRZ?usp=sharing
 
+ ## Framework versions
 
+ - PEFT 0.7.1