mohamedemam committed on
Commit 307bd57 · verified · 1 Parent(s): 119ac8a

Update README.md

Files changed (1)
  1. README.md +3 -8
README.md CHANGED
@@ -14,19 +14,13 @@ datasets:
   - mohamedemam/Essay-quetions-auto-grading
 ---
 
-# Model Card for Model ID
-
-<!-- Provide a quick summary of what the model is/does. -->
-
-
-
-## Model Details
 
 ### Model Description
 
 <!-- Provide a longer summary of what this model is. -->
 
-We are thrilled to introduce our graduation project, the EM5 model, designed for automated essay grading in both Arabic and English. 📝✨
+We are thrilled to introduce our graduation project, the EM2 model, designed for automated essay grading in both Arabic and English. 📝✨
 
 To develop this model, we first created a custom dataset for training. We adapted the QuAC and OpenOrca datasets to make them suitable for our automated essay grading application.
 
@@ -134,11 +128,12 @@ answer="""When choosing a cloud service provider for deploying a large language
 
 By evaluating these factors, you can select a cloud service provider that aligns with your deployment needs, ensuring efficient and cost-effective operation of your large language model."""
 from peft import PeftModel, PeftConfig
-from transformers import AutoModelForCausalLM
+from transformers import AutoModelForCausalLM,AutoTokenizer
 
 config = PeftConfig.from_pretrained("mohamedemam/Em2-llama-7b")
 base_model = AutoModelForCausalLM.from_pretrained("NousResearch/Llama-2-7b-hf")
 model = PeftModel.from_pretrained(base_model, "mohamedemam/Em2-llama-7b")
+tokenizer = AutoTokenizer.from_pretrained("mohamedemam/Em2-llama-7b", trust_remote_code=True)
 pipe=MyPipeline(model,tokenizer)
 print(pipe(context,quetion,answer,generate=True,max_new_tokens=4, num_beams=2, do_sample=False,num_return_sequences=1))
 ```
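Applied together, the two hunks correct the model name (EM5 → EM2) and fix a real bug in the README's inference snippet: `MyPipeline(model, tokenizer)` was called without `tokenizer` ever being created. A minimal sketch of the resulting loading sequence, with a hypothetical `load_em2` wrapper not in the README itself; it assumes `peft` and `transformers` are installed and that `MyPipeline`, `context`, `quetion`, and `answer` are defined as elsewhere in the README:

```python
def load_em2(adapter_id: str = "mohamedemam/Em2-llama-7b"):
    """Load the Llama-2 base model, apply the Em2 LoRA adapter, and
    return the (model, tokenizer) pair that MyPipeline expects.

    Imports are kept inside the function so the sketch can be read and
    imported without peft/transformers until weights are actually loaded
    (the 7B base model download is large).
    """
    from peft import PeftConfig, PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Adapter config (base model name, LoRA hyperparameters); the README
    # loads it but hard-codes the base checkpoint explicitly below.
    config = PeftConfig.from_pretrained(adapter_id)
    base_model = AutoModelForCausalLM.from_pretrained("NousResearch/Llama-2-7b-hf")
    model = PeftModel.from_pretrained(base_model, adapter_id)
    # The line this commit adds: without it, MyPipeline(model, tokenizer)
    # raised NameError because tokenizer was never defined.
    tokenizer = AutoTokenizer.from_pretrained(adapter_id, trust_remote_code=True)
    return model, tokenizer
```

With this wrapper, the README's grading call becomes `pipe = MyPipeline(*load_em2())` followed by the same `pipe(context, quetion, answer, ...)` invocation shown in the diff.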