---
datasets:
- NeelNanda/pile-10k
base_model:
- deepseek-ai/DeepSeek-V2.5-1210
---
## Model Details
This model is an int4 model with group_size 128 and symmetric quantization of [deepseek-ai/DeepSeek-V2.5-1210](https://huggingface.co/deepseek-ai/DeepSeek-V2.5-1210), generated by the [intel/auto-round](https://github.com/intel/auto-round) algorithm. Load the model with `revision="6d3d2cf"` to use the AutoGPTQ format. **Please note that loading the model in Transformers can be quite slow. Consider using an alternative serving framework for better performance.**
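For intuition, symmetric int4 quantization with group_size 128 maps each contiguous group of 128 weights onto the integer range [-8, 7] with one shared scale per group. The sketch below shows only plain round-to-nearest arithmetic for illustration; auto-round's actual algorithm additionally tunes the rounding via signed gradient descent (see the citation at the bottom of this card):
```python
import numpy as np

def int4_sym_quantize(w: np.ndarray, group_size: int = 128):
    """Illustrative round-to-nearest symmetric int4 quantization (not auto-round itself)."""
    groups = w.reshape(-1, group_size)                     # one row per group of 128 weights
    scale = np.abs(groups).max(axis=1, keepdims=True) / 7  # one shared scale per group
    q = np.clip(np.round(groups / scale), -8, 7)           # integer codes in [-8, 7]
    return q.astype(np.int8), scale                        # dequantize as q * scale

q, scale = int4_sym_quantize(np.random.randn(4, 128).astype(np.float32))
print(q.min(), q.max())  # codes stay within the int4 range
```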
For other serving frameworks, the AutoGPTQ format is required. You can run the following command to fetch the model:
```bash
huggingface-cli download OPEA/DeepSeek-V2.5-1210-int4-sym-inc --revision 6d3d2cf
```
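With the AutoGPTQ-format files downloaded, a GPTQ-capable serving stack can load them directly. For example, a minimal vLLM sketch (assuming your vLLM build supports GPTQ kernels for the DeepSeek-V2 architecture; exact flags vary by version):
```bash
vllm serve OPEA/DeepSeek-V2.5-1210-int4-sym-inc \
    --revision 6d3d2cf \
    --trust-remote-code \
    --tensor-parallel-size 2
```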
Please follow the license of the original model.
## How To Use
### INT4 Inference (CPU/CUDA)
````python
from auto_round import AutoRoundConfig  ## must import for the auto-round format
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
import torch

quantized_model_dir = "OPEA/DeepSeek-V2.5-1210-int4-sym-inc"
max_memory = {i: "75GB" for i in range(2)}  ## cap memory on each of two GPUs
model = AutoModelForCausalLM.from_pretrained(
    quantized_model_dir,
    torch_dtype=torch.float16,
    device_map="sequential",
    attn_implementation="eager",
    trust_remote_code=True,
    max_memory=max_memory,
    ##revision="6d3d2cf"  ## uncomment to load the AutoGPTQ format
)
model.generation_config = GenerationConfig.from_pretrained(quantized_model_dir)
model.generation_config.pad_token_id = model.generation_config.eos_token_id
tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir, trust_remote_code=True)
prompt = "There is a girl who likes adventure,"
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=512, ##change this to align with the official usage
do_sample=False ##change this to align with the official usage
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
prompt = "strawberry中有几个r?"
##INT4
"""### 第一步:理解问题
首先,我需要明确问题的含义。问题是:“strawberry中有几个r?” 这里的“strawberry”指的是一种水果,而“r”则是英文字母。问题实际上是在问,单词“strawberry”中包含了多少个字母“r”。
### 第二步:分析单词结构
为了回答这个问题,我需要仔细分析单词“strawberry”的结构,找出其中所有的字母“r”。
单词“strawberry”拼写如下:
```
s t r a w b e r r y
```
### 第三步:逐个字母检查
接下来,我将逐个字母检查,看看哪些字母是“r”。
1. **s** - 不是“r”
2. **t** - 不是“r”
3. **r** - 是“r”
4. **a** - 不是“r”
5. **w** - 不是“r”
6. **b** - 不是“r”
7. **e** - 不是“r”
8. **r** - 是“r”
9. **r** - 是“r”
10. **y** - 不是“r”
### 第四步:统计“r”的数量
通过上述检查,我发现单词“strawberry”中共有三个字母“r”。
### 第五步:验证结果
为了确保我的答案正确,我再次检查了单词的拼写,并重新数了一遍“r”的数量,确认确实有三个“r”。
### 最终答案
综上所述,单词“strawberry”中共有**三个**字母“r”。"""
prompt = "9.11和9.8哪个数字大"
##INT4
"""要比较 **9.11** 和 **9.8** 的大小,可以按照以下步骤进行:
1. **比较整数部分**:
- 两个数的整数部分都是 **9**,因此需要比较小数部分。
2. **比较小数部分**:
- **9.11** 的小数部分是 **0.11**
- **9.8** 的小数部分是 **0.8**
3. **比较小数部分的大小**:
- **0.8** 大于 **0.11**
4. **得出结论**:
- 由于小数部分 **0.8** 大于 **0.11**,所以 **9.8** 大于 **9.11**。
最终答案是:
\[
\boxed{9.8}
\]"""
prompt = "Please give a brief introduction of DeepSeek company."
##INT4:"""DeepSeek Artificial Intelligence Co., Ltd. (referred to as "DeepSeek" or "深度求索") , founded in 2023, is a Chinese company dedicated to making AGI a reality."""
prompt = "There is a girl who likes adventure,"
##INT4:
"""It sounds like you're setting the stage for a story or a character introduction! Here's a little continuation to spark your imagination:
---
There is a girl who likes adventure. Her name is Lily, and her eyes sparkle with curiosity whenever she hears the word "explore." Whether it's hiking through dense forests, diving into the mysteries of the ocean, or wandering through bustling city streets in search of hidden treasures, Lily is always ready for the next thrill.
Her backpack is never without a map, a compass, and a notebook where she scribbles down her discoveries. She believes that every adventure, no matter how small, holds a story waiting to be told. Her friends often joke that she has a sixth sense for finding the most exciting paths, but Lily knows it's just her unwavering determination to seek out the unknown.
One day, while exploring an old, abandoned library, Lily stumbles upon a dusty, leather-bound book. As she flips through its pages, she discovers a series of cryptic clues leading to a legendary treasure hidden deep within the mountains. Without hesitation, she packs her bag and sets off on her greatest adventure yet, ready to uncover the secrets that have eluded others for centuries.
---
Feel free to expand on this or let me know if you'd like to explore a different direction!"""
````
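As a quick sanity check, plain Python agrees with the two arithmetic-style sample answers above:
```python
print("strawberry".count("r"))  # 3 occurrences of "r"
print(max(9.11, 9.8))           # 9.8 is the larger number
```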
### Evaluate the model
```bash
pip3 install lm-eval==0.4.5
auto-round --model "OPEA/DeepSeek-V2.5-1210-int4-sym-inc" --eval --eval_bs 8 --tasks leaderboard_ifeval,leaderboard_mmlu_pro,gsm8k,lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,arc_easy,arc_challenge,cmmlu,ceval-valid --devices 0,1,2,3
```
| Metric | BF16 | INT4 |
| :----------------------------------------- | :--: | :----: |
| Avg | | |
| leaderboard_mmlu_pro 5 shots | | 0.521 |
| leaderboard_ifeval inst_level_strict_acc | | |
| leaderboard_ifeval prompt_level_strict_acc | | |
| mmlu | | 0.7690 |
| cmmlu | | |
| ceval-valid | | |
| gsm8k 5 shots | | |
| lambada_openai | | |
| hellaswag | | |
| winogrande | | |
| piqa | | |
| truthfulqa_mc1 | | |
| openbookqa | | |
| boolq | | |
| arc_easy | | |
| arc_challenge | | |
### Generate the model
Here is a sample command to reproduce the quantized model.
```bash
auto-round \
--model deepseek-ai/DeepSeek-V2.5-1210 \
--device 0 \
--disable_eval \
--format 'auto_gptq,auto_round' \
--output_dir "./tmp_autoround"
```
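Quantization can also be driven from Python. Below is a minimal sketch using auto-round's `AutoRound` API with this checkpoint's settings (int4, group_size 128, symmetric); argument names follow the auto-round README and may differ across versions:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

model_name = "deepseek-ai/DeepSeek-V2.5-1210"
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

## match this checkpoint: int4, group_size 128, symmetric
autoround = AutoRound(model, tokenizer, bits=4, group_size=128, sym=True)
autoround.quantize()
autoround.save_quantized("./tmp_autoround", format="auto_gptq")
```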
## Ethical Considerations and Limitations
The model can produce factually incorrect output and should not be relied on for factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased, or otherwise offensive outputs.
Therefore, before deploying any applications of the model, developers should perform safety testing.
## Caveats and Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Here is a useful link to learn more about Intel's AI software:
- Intel Neural Compressor [link](https://github.com/intel/neural-compressor)
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Cite
```bibtex
@article{cheng2023optimize,
  title={Optimize weight rounding via signed gradient descent for the quantization of llms},
  author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi},
  journal={arXiv preprint arXiv:2309.05516},
  year={2023}
}
```
[arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)