cicdatopea committed · verified
Commit b18d838 · Parent(s): 60bc5a8

update download

Files changed (1): README.md (+1 -1)
README.md CHANGED
@@ -9,7 +9,7 @@ base_model:
 This model is an int4 model with group_size 128 and symmetric quantization of [deepseek-ai/DeepSeek-V2.5-1210](https://huggingface.co/deepseek-ai/DeepSeek-V2.5-1210) generated by the [intel/auto-round](https://github.com/intel/auto-round) algorithm. Load the model with `revision="6d3d2cf"` to use the AutoGPTQ format. **Please note that loading the model in Transformers can be quite slow. Consider using an alternative serving framework for better performance.**
 For other serving frameworks, the AutoGPTQ format is required. You can run the following command to fetch the model:
 ```bash
-git clone https://huggingface.co/OPEA/DeepSeek-V2.5-1210-int4-sym-inc && cd DeepSeek-V2.5-1210-int4-sym-inc && git checkout 6d3d2cf
+huggingface-cli download OPEA/DeepSeek-V2.5-1210-int4-sym-inc --revision 6d3d2cf
 ```

 Please follow the license of the original model.
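The README only mentions loading the quantized checkpoint in Transformers with `revision="6d3d2cf"`; a minimal sketch of what that looks like is below. It assumes a standard `from_pretrained` call with a GPTQ-capable Transformers install and `accelerate` available for `device_map="auto"`; the prompt and generation settings are illustrative, not from the commit.

```python
# Minimal sketch: load the AutoGPTQ-format revision of the quantized model
# in Transformers. Assumes a GPTQ backend and accelerate are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OPEA/DeepSeek-V2.5-1210-int4-sym-inc"
revision = "6d3d2cf"  # AutoGPTQ-format revision referenced in the README

tokenizer = AutoTokenizer.from_pretrained(model_id, revision=revision, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    revision=revision,
    trust_remote_code=True,  # DeepSeek-V2.5 ships custom modeling code
    torch_dtype="auto",
    device_map="auto",       # adjust for your hardware; loading can be slow
)

prompt = "Briefly explain int4 symmetric quantization."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```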