yintongl committed · verified · Commit f5d2d04 · 1 Parent(s): 5eb5bd6

Update README.md

Files changed (1)
  1. README.md +1 -14
README.md CHANGED
@@ -10,23 +10,10 @@ language:
 ## Model Details

 This model is an int4 model with group_size 128 of [EleutherAI/gpt-j-6b](https://huggingface.co/EleutherAI/gpt-j-6b) generated by [intel/auto-round](https://github.com/intel/auto-round).
+Inference of this model is compatible with AutoGPTQ's Kernel.


-### INT4 Inference with AutoGPTQ's Kernel
-
-```python
-##pip install auto-gptq[triton]
-##pip install triton==2.2.0
-from transformers import AutoModelForCausalLM, AutoTokenizer
-quantized_model_dir = "Intel/gpt-j-6b-int4-inc"
-model = AutoModelForCausalLM.from_pretrained(quantized_model_dir,
-                                             device_map="auto",
-                                             trust_remote_code=False,
-                                             )
-tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir, use_fast=True)
-print(tokenizer.decode(model.generate(**tokenizer("There is a girl who likes adventure,", return_tensors="pt").to(model.device), max_new_tokens=50)[0]))
-```

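For reference, the inference snippet removed by this commit can be sketched in runnable form as below. This is a cleanup of the removed code, not part of the new README: it assumes `auto-gptq[triton]` (with `triton==2.2.0`) and `transformers` are installed, and that the `Intel/gpt-j-6b-int4-inc` checkpoint is reachable; the `generate` helper name is ours.

```python
# Sketch of INT4 inference with AutoGPTQ's kernel, based on the snippet
# removed in this commit. Assumes: pip install auto-gptq[triton] triton==2.2.0
from transformers import AutoModelForCausalLM, AutoTokenizer

quantized_model_dir = "Intel/gpt-j-6b-int4-inc"


def generate(prompt: str, max_new_tokens: int = 50) -> str:
    """Load the int4 checkpoint and generate a continuation of `prompt`."""
    model = AutoModelForCausalLM.from_pretrained(
        quantized_model_dir,
        device_map="auto",         # place layers on available GPU(s)/CPU
        trust_remote_code=False,
    )
    tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir, use_fast=True)
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0])


if __name__ == "__main__":
    print(generate("There is a girl who likes adventure,"))
```

Wrapping the model load in a function (rather than running it at import time, as the original snippet did) keeps the expensive download out of module import.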