Update README.md

README.md
## Model Details

This model is an int4 model with group_size 128 of [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0) generated by [intel/auto-round](https://github.com/intel/auto-round).
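To make "int4 with group_size 128" concrete, here is a minimal NumPy sketch of group-wise 4-bit weight quantization: weights are split into groups of 128 values, and each group gets its own scale. This is only an illustration of the storage format; auto-round's actual algorithm additionally tunes the rounding of each weight, which this toy code does not do.

```python
import numpy as np

# Toy illustration of group-wise int4 quantization (not auto-round itself):
# each group of `group_size` weights shares one scale, and weights are
# rounded to the signed 4-bit range [-8, 7].
def quantize_int4_groupwise(w, group_size=128):
    w = w.reshape(-1, group_size)                        # one row per group
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0   # symmetric scale per group
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 256)).astype(np.float32)

q, scale = quantize_int4_groupwise(w.reshape(-1), group_size=128)
w_hat = dequantize(q, scale).reshape(4, 256)
print(np.abs(w - w_hat).max())  # small per-group rounding error
```

Each stored value fits in 4 bits, so with one scale per 128 weights the memory cost is roughly 4.03 bits per weight plus scales, which is the trade-off the group_size parameter controls.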
### INT4 Inference with AutoGPTQ's Kernel

```python
# pip install auto-gptq[triton]
# pip install triton==2.2.0
from transformers import AutoModelForCausalLM, AutoTokenizer

quantized_model_dir = "Intel/SOLAR-10.7B-Instruct-v1.0-int4-inc"
model = AutoModelForCausalLM.from_pretrained(
    quantized_model_dir,
    device_map="auto",
    trust_remote_code=False,
)
tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir, use_fast=True)

inputs = tokenizer("There is a girl who likes adventure,", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0]))
```
Inference of this model is compatible with AutoGPTQ's kernel.

### Evaluate the model