---
license: mit
license_link: https://choosealicense.com/licenses/mit/
base_model:
- databricks/dolly-v2-3b
base_model_relation: quantized
---
# dolly-v2-3b-int8-ov
* Model creator: [Databricks](https://huggingface.co/databricks)
* Original model: [dolly-v2-3b](https://huggingface.co/databricks/dolly-v2-3b)

## Description
This is the [dolly-v2-3b](https://huggingface.co/databricks/dolly-v2-3b) model converted to the [OpenVINO™ IR](https://docs.openvino.ai/2024/documentation/openvino-ir-format.html) (Intermediate Representation) format with weights compressed to INT8 by [NNCF](https://github.com/openvinotoolkit/nncf).

## Quantization Parameters

Weight compression was performed using `nncf.compress_weights` with the following parameters:

* mode: **int8_asym**
* ratio: **1**

For more information on quantization, check the [OpenVINO model optimization guide](https://docs.openvino.ai/2024/openvino-workflow/model-optimization-guide/weight-compression.html).
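
For illustration, below is a minimal sketch of how a comparable INT8 compression could be reproduced with NNCF and Optimum Intel. It is an assumption about the workflow, not the exact export pipeline used for this model, and the output path is hypothetical:

```
import nncf
import openvino as ov
from optimum.intel.openvino import OVModelForCausalLM

# Export the original model to OpenVINO IR with full-precision weights
# (load_in_8bit=False disables the default on-the-fly compression)
model = OVModelForCausalLM.from_pretrained(
    "databricks/dolly-v2-3b", export=True, load_in_8bit=False
)

# Compress all weights to asymmetric INT8 (mode=int8_asym, ratio=1)
compressed_model = nncf.compress_weights(
    model.model, mode=nncf.CompressWeightsMode.INT8_ASYM, ratio=1.0
)

# Save the compressed IR (hypothetical output path)
ov.save_model(compressed_model, "dolly-v2-3b-int8-ov/openvino_model.xml")
```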


## Compatibility

The provided OpenVINO™ IR model is compatible with:

* OpenVINO version 2024.4.0 and higher
* Optimum Intel 1.20.0 and higher

## Running Model Inference

1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend:

```
pip install optimum[openvino]
```

2. Run model inference:

```
from transformers import AutoTokenizer
from optimum.intel.openvino import OVModelForCausalLM

model_id = "OpenVINO/dolly-v2-3b-int8-ov"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("What is OpenVINO?", return_tensors="pt")

outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
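
As an optional follow-up (not part of the original example), the inference device can be changed before generation; this sketch assumes an Intel GPU with the OpenVINO GPU plugin is available:

```
# Optional: move the model to an Intel GPU and recompile (assumes a GPU plugin is installed)
model.to("gpu")
model.compile()
```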

For more examples and possible optimizations, refer to the [OpenVINO Large Language Model Inference Guide](https://docs.openvino.ai/2024/learn-openvino/llm_inference_guide.html).

## Limitations

Check the [original model card](https://huggingface.co/databricks/dolly-v2-3b) for limitations.

## Legal information

The original model is distributed under the [MIT](https://choosealicense.com/licenses/mit/) license. More details can be found in the [original model card](https://huggingface.co/databricks/dolly-v2-3b).

## Disclaimer

Intel is committed to respecting human rights and avoiding causing or contributing to adverse impacts on human rights. See [Intel’s Global Human Rights Principles](https://www.intel.com/content/dam/www/central-libraries/us/en/documents/policy-human-rights.pdf). Intel’s products and software are intended only to be used in applications that do not cause or contribute to adverse impacts on human rights.