cicdatopea committed
Commit 58077b4 · verified · 1 Parent(s): 44f6bc8

Update README.md

Files changed (1)
  1. README.md +157 -3
README.md CHANGED
@@ -5,20 +5,174 @@ base_model:
  - deepseek-ai/DeepSeek-V3


+
---

## Model Details

This model is an int4 model with group_size 128 and symmetric quantization of [deepseek-ai/DeepSeek-V3](https://huggingface.co/deepseek-ai/DeepSeek-V3) generated by the [intel/auto-round](https://github.com/intel/auto-round) algorithm.
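
As a rough illustration (our sketch, not AutoRound's actual algorithm), symmetric int4 quantization with group_size 128 keeps one scale per group of 128 weights and no zero point; `fake_quant_sym_int4` below is a hypothetical helper:

~~~python
import torch

def fake_quant_sym_int4(w: torch.Tensor, group_size: int = 128) -> torch.Tensor:
    """Illustrative symmetric int4 fake-quantization, one scale per group of 128 weights."""
    orig_shape = w.shape
    g = w.reshape(-1, group_size)  # assumes w.numel() is divisible by group_size
    scale = (g.abs().amax(dim=1, keepdim=True) / 7.0).clamp_min(1e-12)  # symmetric: no zero point
    q = torch.clamp(torch.round(g / scale), -8, 7)  # int4 integer range
    return (q * scale).reshape(orig_shape)  # dequantized ("fake-quant") weights
~~~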

- **On CUDA devices, this model is prone to overflow caused by the INT4 kernel using the FP16 computation dtype. Additionally, loading the model in Transformers can be quite slow. Consider using an alternative serving framework capable of running INT4 models with the BF16 computation dtype.**
-
- Due to limited GPU resources, we have only tested a few prompts on a CPU backend with intel-extension-for-transformers. If this model does not meet your performance expectations, you may explore another quantized model in AWQ format, generated via AutoRound with different hyperparameters. This alternative model will be uploaded soon.
+ **Loading the model in Transformers can be quite slow, especially on CUDA devices (30 minutes to 1 hour). Consider using an alternative serving framework.** However, we have not tested other frameworks due to limited CUDA resources.

Please follow the license of the original model.

## How To Use

+ **INT4 Inference on CUDA** (**at least 7×80GB**)
+
+ On CUDA devices, the computation dtype for INT4 is typically FP16, which may lead to overflow for this model. While we have added a workaround to address this issue, we cannot guarantee reliable performance for all prompts. **For better stability, the CPU version is recommended; please refer to the following section for details.**
+
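+ As a quick illustrative check (our addition, not part of the workaround itself), you can run a single forward pass and look for Inf/NaN in the logits; `has_fp16_overflow` is a hypothetical helper name:
+
+ ~~~python
+ import torch
+
+ def has_fp16_overflow(model, tokenizer, prompt: str) -> bool:
+     """Return True if a single forward pass yields Inf/NaN logits."""
+     inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+     with torch.no_grad():
+         logits = model(**inputs).logits
+     return bool(torch.isinf(logits).any() or torch.isnan(logits).any())
+ ~~~
+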
+ ~~~python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ import torch
+
+ quantized_model_dir = "/dataset/int4_models/DeepSeek-V3-int4-sym-gptq-inc-preview"
+
+ ## directly use device_map="auto" (optionally with max_memory) if you have enough GPUs
+ max_memory = {i: "75GiB" for i in range(7)}
+ ## otherwise, pin the embedding, head, and final norm to GPU 0
+ ## and spread the 61 decoder layers across the 7 GPUs
+ device_map = {"model.norm": 0, "lm_head": 0, "model.embed_tokens": 0}
+ for i in range(61):
+     name = "model.layers." + str(i)
+     if i < 8:
+         device_map[name] = 0
+     elif i < 16:
+         device_map[name] = 1
+     elif i < 25:
+         device_map[name] = 2
+     elif i < 34:
+         device_map[name] = 3
+     elif i < 43:
+         device_map[name] = 4
+     elif i < 52:
+         device_map[name] = 5
+     elif i < 61:
+         device_map[name] = 6
+
+ model = AutoModelForCausalLM.from_pretrained(
+     quantized_model_dir,
+     torch_dtype=torch.float16,
+     trust_remote_code=True,
+     device_map=device_map,
+ )
+
+ tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir, trust_remote_code=True)
+ prompts = [
+     "9.11和9.8哪个数字大",
+     "strawberry中有几个r?",
+     "How many r in strawberry.",
+     "There is a girl who likes adventure,",
+     "Please give a brief introduction of DeepSeek company.",
+     "hello",
+ ]
+
+ texts = []
+ for prompt in prompts:
+     messages = [
+         {"role": "system", "content": "You are a helpful assistant."},
+         {"role": "user", "content": prompt},
+     ]
+     text = tokenizer.apply_chat_template(
+         messages,
+         tokenize=False,
+         add_generation_prompt=True,
+     )
+     texts.append(text)
+ inputs = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
+
+ outputs = model.generate(
+     input_ids=inputs["input_ids"].to(model.device),
+     attention_mask=inputs["attention_mask"].to(model.device),
+     max_length=200,  ## change this to align with the official usage
+     num_return_sequences=1,  ## change this to align with the official usage
+     do_sample=False,
+ )
+ ## drop the prompt tokens so only the newly generated text is decoded
+ generated_ids = [
+     output_ids[len(input_ids):] for input_ids, output_ids in zip(inputs["input_ids"], outputs)
+ ]
+
+ decoded_outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
+
+ for i, prompt in enumerate(prompts):
+     print(f"Prompt: {prompt}")
+     print(f"Generated: {decoded_outputs[i]}")
+     print("-" * 50)
+
+ """
+ Prompt: 9.11和9.8哪个数字大
+ Generated: 要比较 **9.11** 和 **9.8** 的大小,可以按照以下步骤进行:
+
+ 1. **比较整数部分**:
+    - 两个数的整数部分都是 **9**,因此整数部分相同。
+
+ 2. **比较小数部分**:
+    - **9.11** 的小数部分是 **0.11**
+    - **9.8** 的小数部分是 **0.8**
+
+ 3. **统一小数位数**:
+    - 将 **0.8** 转换为 **0.80**,以便于比较。
+
+ 4. **进行大小比较**:
+    - **0.80** > **0.11**
+
+ 因此,**9.8** 大于 **9.11**。
+
+ 最终答案:\boxed{9.8}
+ --------------------------------------------------
+
+ --------------------------------------------------
+ Prompt: strawberry中有几个r?
+ Generated: ### 第一步:理解问题
+
+ 首先,我需要明确问题的含义。问题是:“strawberry中有几个r?”。这里的“strawberry”是一个英文单词,意思是“草莓”。问题问的是这个单词中有多少个字母“r”。
+
+ ### 第二步:分解单词
+
+ 为了找出“strawberry”中有多少个“r”,我需要将这个单词分解成单个字母。让我们逐个字母来看:
+
+ - s
+ # 2023年10月浙江宁波市鄞州区第二医院医共体首南分院编外人员招考聘用笔试历年高频考点(难、易错点荟萃)附带答案详解.docx
+
+ ## 2023年10月浙江宁波市鄞州区第二医院医共体首南分院编外人员招考聘用笔试历年高频考点(难、易错点荟萃)附带答案详解.docx
+
+ - 4、
+ --------------------------------------------------
+ Prompt: How many r in strawberry.
+ Generated: The word "strawberry" contains **3 "r"s.
+ --------------------------------------------------
+ Prompt: There is a girl who likes adventure,
+ Generated: That's wonderful! A girl who loves adventure is likely curious, brave, and eager to explore new experiences. Here are some ideas to fuel her adventurous spirit:
+
+ ### Outdoor Adventures:
+ 1. **Hiking**: Explore local trails, national parks, or even plan a multi-day trek.
+ 2. **Camping**: Spend a night under the stars, roast marshmallows, and tell stories around a campfire.
+ 3. **Rock Climbing**: Challenge herself with indoor or outdoor climbing.
+ 4. **Kayaking or Canoeing**: Paddle through rivers, lakes, or even the ocean.
+ 5. **Zip-lining**: Soar through the treetops for an adrenaline rush.
+
+ ### Travel Adventures:
+ 1. **Road Trips**: Plan a trip to a new city or state, stopping at interesting landmarks along the way.
+ 2. **Backpacking**: Travel light and explore
+ --------------------------------------------------
+ Prompt: Please give a brief introduction of DeepSeek company.
+ Generated: DeepSeek Artificial Intelligence Co., Ltd. (referred to as “DeepSeek” or “深度求索”) , founded in 2023, is a Chinese company dedicated to making AGI a reality.
+ --------------------------------------------------
+ Prompt: hello
+ Generated: Hello! How can I assist you today? 😊
+ """
+ ~~~
+
### INT4 Inference on CPU with ITREX (Recommended)

**pip3 install auto-round** (this installs both intel-extension-for-pytorch and intel-extension-for-transformers). On Intel CPUs it will prioritize intel-extension-for-pytorch; on other CPUs it will prioritize intel-extension-for-transformers.
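
Below is a minimal loading sketch for the CPU path, assuming the same Transformers interface as the CUDA example above; we have only tested a few prompts on this backend, and the BF16 dtype choice follows the note that BF16 compute avoids the FP16 overflow seen on CUDA:

~~~python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

quantized_model_dir = "/dataset/int4_models/DeepSeek-V3-int4-sym-gptq-inc-preview"

## with auto-round installed, the INT4 weights run on the Intel CPU backends noted above
model = AutoModelForCausalLM.from_pretrained(
    quantized_model_dir,
    torch_dtype=torch.bfloat16,
    device_map="cpu",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir, trust_remote_code=True)

inputs = tokenizer("There is a girl who likes adventure,", return_tensors="pt")
outputs = model.generate(**inputs, max_length=200, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
~~~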