---
datasets:
- NeelNanda/pile-10k
base_model:
- deepseek-ai/DeepSeek-V3
---
## Model Details
This model is an int4 model with group_size 128 and symmetric quantization of [deepseek-ai/DeepSeek-V3](https://huggingface.co/deepseek-ai/DeepSeek-V3), generated by the [intel/auto-round](https://github.com/intel/auto-round) algorithm.

**Loading the model in Transformers can be quite slow, especially on CUDA devices (30 minutes to 1 hour). Consider using an alternative serving framework, keeping in mind that some frameworks have overflow issues.** We have not tested other frameworks ourselves due to limited CUDA resources.

Please follow the license of the original model.
## How To Use
**INT4 Inference on CUDA** (**at least 7x80G**)

On CUDA devices, the computation dtype for int4 is typically FP16, which may lead to overflow for this model.
While we have added a workaround to address this issue, we cannot guarantee reliable performance for all prompts.

**For better stability, the CPU version is recommended. Please refer to the following section for details.**
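As a quick illustration of the overflow risk (our example, not from the original card): FP16 saturates at 65504, so any activation magnitude beyond that becomes `inf`.

~~~python
import torch

# FP16 represents magnitudes only up to 65504; anything larger overflows to inf.
x = torch.tensor([65504.0, 70000.0], dtype=torch.float16)
print(x)  # tensor([65504., inf], dtype=torch.float16)
~~~

The full multi-GPU inference example: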
~~~python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
quantized_model_dir = "OPEA/DeepSeek-V3-int4-sym-gptq-inc"
## directly use device_map='auto' if you have enough GPUs
## otherwise, spread the 61 decoder layers across 7 GPUs by hand
device_map = {"model.norm": 0, "lm_head": 0, "model.embed_tokens": 0}
for i in range(61):
    name = "model.layers." + str(i)
    if i < 8:
        device_map[name] = 0
    elif i < 16:
        device_map[name] = 1
    elif i < 25:
        device_map[name] = 2
    elif i < 34:
        device_map[name] = 3
    elif i < 43:
        device_map[name] = 4
    elif i < 52:
        device_map[name] = 5
    else:
        device_map[name] = 6
model = AutoModelForCausalLM.from_pretrained(
    quantized_model_dir,
    torch_dtype=torch.float16,
    trust_remote_code=True,
    device_map=device_map,
)
tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir, trust_remote_code=True)

prompts = [
    "9.11和9.8哪个数字大",
    "strawberry中有几个r?",
    "How many r in strawberry.",
    "There is a girl who likes adventure,",
    "Please give a brief introduction of DeepSeek company.",
    "hello"
]
texts = []
for prompt in prompts:
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": prompt}
    ]
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True
    )
    texts.append(text)
inputs = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
outputs = model.generate(
    input_ids=inputs["input_ids"].to(model.device),
    attention_mask=inputs["attention_mask"].to(model.device),
    max_length=200,  ## change this to align with the official usage
    num_return_sequences=1,
    do_sample=False  ## change this to align with the official usage
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(inputs["input_ids"], outputs)
]
decoded_outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)

for i, prompt in enumerate(prompts):
    print(f"Prompt: {prompt}")
    print(f"Generated: {decoded_outputs[i]}")
    print("-" * 50)
"""
Prompt: 9.11和9.8哪个数字大
Generated: 要比较 **9.11** 和 **9.8** 的大小,可以按照以下步骤进行:
1. **比较整数部分**:
- 两个数的整数部分都是 **9**,因此整数部分相同。
2. **比较小数部分**:
- **9.11** 的小数部分是 **0.11**
- **9.8** 的小数部分是 **0.8**
3. **统一小数位数**:
- 将 **0.8** 转换为 **0.80**,以便于比较。
4. **进行大小比较**:
- **0.80** > **0.11**
因此,**9.8** 大于 **9.11**。
最终答案:\boxed{9.8}
--------------------------------------------------
Prompt: strawberry中有几个r?
Generated: ### 第一步:理解问题
首先,我需要明确问题的含义。问题是:“strawberry中有几个r?”。这里的“strawberry”是一个英文单词,意思是“草莓”。问题问的是这个单词中有多少个字母“r”。
### 第二步:分解单词
为了找出“strawberry”中有多少个“r”,我需要将这个单词分解成单个字母。让我们逐个字母来看:
- s
# 2023年10月浙江宁波市鄞州区第二医院医共体首南分院编外人员招考聘用笔试历年高频考点(难、易错点荟萃)附带答案详解.docx
## 2023年10月浙江宁波市鄞州区第二医院医共体首南分院编外人员招考聘用笔试历年高频考点(难、易错点荟萃)附带答案详解.docx
- 4、
--------------------------------------------------
Prompt: How many r in strawberry.
Generated: The word "strawberry" contains **3 "r"s.
--------------------------------------------------
Prompt: There is a girl who likes adventure,
Generated: That's wonderful! A girl who loves adventure is likely curious, brave, and eager to explore new experiences. Here are some ideas to fuel her adventurous spirit:
### Outdoor Adventures:
1. **Hiking**: Explore local trails, national parks, or even plan a multi-day trek.
2. **Camping**: Spend a night under the stars, roast marshmallows, and tell stories around a campfire.
3. **Rock Climbing**: Challenge herself with indoor or outdoor climbing.
4. **Kayaking or Canoeing**: Paddle through rivers, lakes, or even the ocean.
5. **Zip-lining**: Soar through the treetops for an adrenaline rush.
### Travel Adventures:
1. **Road Trips**: Plan a trip to a new city or state, stopping at interesting landmarks along the way.
2. **Backpacking**: Travel light and explore
--------------------------------------------------
Prompt: Please give a brief introduction of DeepSeek company.
Generated: DeepSeek Artificial Intelligence Co., Ltd. (referred to as "DeepSeek" or "深度求索") , founded in 2023, is a Chinese company dedicated to making AGI a reality.
--------------------------------------------------
Prompt: hello
Generated: Hello! How can I assist you today? 😊
"""
~~~
### INT4 Inference on CPU with ITREX (Recommended)

**pip3 install auto-round** (this installs both intel-extension-for-pytorch and intel-extension-for-transformers). On Intel CPUs it prioritizes intel-extension-for-pytorch; on other CPUs it prioritizes intel-extension-for-transformers.

**To make sure qbits from intel-extension-for-transformers is used, please uninstall intel-extension-for-pytorch.**

- intel-extension-for-transformers: faster repacking, slower inference, higher accuracy
- intel-extension-for-pytorch: much slower repacking, faster inference, lower accuracy
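In shell form, the setup above amounts to the following (the uninstall step is only needed to force the qbits backend):

```bash
pip3 install auto-round                       # pulls in both CPU backends
# optional: force qbits (intel-extension-for-transformers) by removing
# intel-extension-for-pytorch, which otherwise takes priority on Intel CPUs
pip3 uninstall -y intel-extension-for-pytorch
```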
~~~python
from auto_round import AutoRoundConfig  ## must import for the autoround format
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
quantized_model_dir = "OPEA/DeepSeek-V3-int4-sym-gptq-inc"
quantization_config = AutoRoundConfig(
backend="cpu"
)
model = AutoModelForCausalLM.from_pretrained(
    quantized_model_dir,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="cpu",
    revision="8fe0735",  ## use the autoround format; the only difference is config.json
    quantization_config=quantization_config,  ## a CPU-only machine doesn't need to set this value
)
tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir, trust_remote_code=True)
prompt = "There is a girl who likes adventure,"
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=200,  ## change this to align with the official usage
    do_sample=False  ## change this to align with the official usage
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
prompt = "9.11和9.8哪个数字大"
##INT4
"""要比较 **9.11** 和 **9.8** 的大小,可以按照以下步骤进行:
1. **比较整数部分**:
- 两个数的整数部分都是 **9**,所以整数部分相同。
2. **比较小数部分**:
- **9.11** 的小数部分是 **0.11**
- **9.8** 的小数部分是 **0.8**(即 **0.80**)
3. **分析小数部分**:
- **0.80** 大于 **0.11**
因此,**9.8** 大于 **9.11**。
最终答案:\boxed{9.8}
"""
prompt = "strawberry中有几个r?"
##INT4
"""
### 第一步:理解问题
首先,我需要明确问题的含义。问题是:“strawberry中有几个r?”。这里的“strawberry”是一个英文单词,意思是“草莓”。问题问的是这个单 词中有多少个字母“r”。
### 第二步:分解单词
为了找出“strawberry”中有多少个“r”,我需要将这个单词分解成单个字母。让我们逐个字母来看:
- s
- t
- r
- a
- w
- b
- e
- r
- r
- y
### 第三步:识别字母“r”
现在,我需要找出这些字母中哪些是“r”。让我们逐一检查:
1. s - 不是r
2. t - 不是r
3. r - 是r
4. a - 不是r
5. w - 不是r
6. b - 不是r
7. e - 不是r
8. r - 是r
"""
prompt = "How many r in strawberry."
##INT4
"""The word "strawberry" contains **3 "r"s.
"""
prompt = "There is a girl who likes adventure,"
##INT4:
"""That's wonderful! A girl who loves adventure is likely curious, brave, and eager to explore the world around her. Here are some ideas to fuel her adventurous spirit:
### **Outdoor Adventures**
- **Hiking:** Explore local trails, national parks, or mountains.
- **Camping:** Spend a night under the stars and connect with nature.
- **Rock Climbing:** Challenge herself with bouldering or climbing walls.
- **Kayaking/Canoeing:** Paddle through rivers, lakes, or even the ocean.
- **Zip-lining:** Soar through the treetops for an adrenaline rush.
### **Travel Adventures**
- **Road Trips:** Plan a journey to new cities or scenic destinations.
- **Backpacking:** Travel light and explore different cultures and landscapes.
- **Volunteer Abroad:** Combine adventure with helping others in a new country.
### **Creative Adventures**
- **Photography:** Capture the beauty
"""
prompt = "Please give a brief introduction of DeepSeek company."
##INT4:
"""DeepSeek Artificial Intelligence Co., Ltd. (referred to as "DeepSeek" or "深度求索") , founded in 2023, is a Chinese company dedicated to making AGI a reality"""
~~~
### Evaluate the model
We do not have enough resources to evaluate the model.
### Generate the model
**5x80G GPUs are needed (this could be optimized), plus 1.4T of CPU memory.**

We discovered that the inputs and outputs of certain layers in this model are very large, even exceeding the FP16 range when tested with a few prompts. It is recommended to exclude these layers from quantization (particularly 'down_proj' in layer 60) and to run them in BF16 precision instead. However, we have not done this in this int4 model, because on CPU the compute dtype for int4 is BF16 or FP32. The dump below lists the affected layers with their observed maxima; a sketch of how such values can be collected follows it.
~~~
model.layers.60.mlp.experts.150.down_proj tensor(1144.) tensor(2122.9451)
model.layers.60.mlp.experts.231.down_proj tensor(25856.) tensor(12827.9980)
model.layers.60.mlp.shared_experts.down_proj tensor(1880.) tensor(3156.7344)
model.layers.60.mlp.experts.81.down_proj tensor(4416.) tensor(6124.6846)
model.layers.60.mlp.experts.92.down_proj tensor(107520.) tensor(50486.0781)
model.layers.59.mlp.experts.138.down_proj tensor(1568.) tensor(190.8769)
model.layers.60.mlp.experts.81.down_proj tensor(7360.) tensor(10024.4531)
model.layers.60.mlp.experts.92.down_proj tensor(116224.) tensor(55192.4180)
~~~
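The original card does not include the collection script; below is a minimal sketch (ours) of how per-layer input/output maxima like those above could be gathered with PyTorch forward hooks. The layer filter and threshold are illustrative; 65504 is the largest finite FP16 value.

~~~python
import torch

FP16_MAX = 65504.0   # largest finite FP16 value
THRESHOLD = 1000.0   # illustrative: flag layers whose activations approach the FP16 range

def attach_amax_hooks(model):
    """Print the max |input| and max |output| of suspicious down_proj layers."""
    def make_hook(name):
        def hook(module, inputs, output):
            in_amax = inputs[0].abs().max()
            out_amax = output.abs().max()
            if in_amax > THRESHOLD or out_amax > THRESHOLD:
                print(name, in_amax, out_amax)
        return hook
    for name, module in model.named_modules():
        if isinstance(module, torch.nn.Linear) and "down_proj" in name:
            module.register_forward_hook(make_hook(name))

# usage: attach_amax_hooks(model), then run a few prompts; printed layers
# are candidates for BF16 fallback instead of int4 quantization.
~~~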
**1. Add metadata to the BF16 model** https://huggingface.co/opensourcerelease/DeepSeek-V3-bf16
~~~python
import safetensors
from safetensors.torch import save_file
for i in range(1, 164):
    idx_str = "0" * (5 - len(str(i))) + str(i)  # zero-pad the shard index to 5 digits
    safetensors_path = f"model-{idx_str}-of-000163.safetensors"
    print(safetensors_path)
    tensors = dict()
    with safetensors.safe_open(safetensors_path, framework="pt") as f:
        for key in f.keys():
            tensors[key] = f.get_tensor(key)
    save_file(tensors, safetensors_path, metadata={'format': 'pt'})
~~~
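To verify the metadata was written (our quick check, not part of the original recipe), reopen a shard and inspect its header:

~~~python
import safetensors

# metadata() returns the header's __metadata__ dict written by save_file above
with safetensors.safe_open("model-00001-of-000163.safetensors", framework="pt") as f:
    print(f.metadata())  # expected: {'format': 'pt'}
~~~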
**2. Replace modeling_deepseek.py with the following file.** It basically aligns devices and removes torch.no_grad, since AutoRound needs gradients for tuning.
https://github.com/intel/auto-round/blob/deepseekv3/modeling_deepseek.py
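Why removing torch.no_grad matters can be seen in a tiny standalone example (ours, not taken from the linked file): outputs produced under no_grad are detached from the autograd graph, which would break AutoRound's gradient-based rounding.

~~~python
import torch

lin = torch.nn.Linear(4, 4)
x = torch.randn(2, 4)

@torch.no_grad()
def forward_no_grad(t):
    return lin(t)

print(forward_no_grad(x).requires_grad)  # False: no gradient flows, tuning breaks
print(lin(x).requires_grad)              # True: differentiable, as AutoRound needs
~~~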
**3. Tuning**
```bash
git clone https://github.com/intel/auto-round.git && cd auto-round && git checkout deepseekv3
```
```bash
python3 -m auto_round --model "/models/DeepSeek-V3-bf16/" --group_size 128 --format "auto_gptq" --iters 200 --devices 0,1,2,3,4 --nsamples 512 --batch_size 8 --seqlen 512 --low_gpu_mem_usage --output_dir "tmp_autoround" --disable_eval 2>&1 | tee -a seekv3.txt
```
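For reference, a roughly equivalent Python-API invocation is sketched below. This is an assumption on our part: the argument names mirror the CLI flags above and the AutoRound API on the pinned `deepseekv3` branch, and multi-GPU placement is omitted; check that branch for the exact signatures.

~~~python
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

model_dir = "/models/DeepSeek-V3-bf16/"
model = AutoModelForCausalLM.from_pretrained(model_dir, torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)

# mirror the CLI: int4, group_size 128, 200 tuning iters, 512 calibration samples of seqlen 512
autoround = AutoRound(model, tokenizer, bits=4, group_size=128, sym=True,
                      iters=200, nsamples=512, batch_size=8, seqlen=512,
                      low_gpu_mem_usage=True)
autoround.quantize()
autoround.save_quantized("tmp_autoround", format="auto_gptq")
~~~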
## Ethical Considerations and Limitations
The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
Therefore, before deploying any applications of the model, developers should perform safety testing.
## Caveats and Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Here is a useful link to learn more about Intel's AI software:
- Intel Neural Compressor [link](https://github.com/intel/neural-compressor)
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Cite
```bibtex
@article{cheng2023optimize,
  title={Optimize weight rounding via signed gradient descent for the quantization of llms},
  author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi},
  journal={arXiv preprint arXiv:2309.05516},
  year={2023}
}
```

[arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)