cicdatopea committed · verified · Commit e9042f5 · 1 parent: 5b5e906

Update README.md

Files changed (1): README.md (+1, -1)
README.md CHANGED
@@ -11,7 +11,7 @@ This model is an int4 model with group_size 128 and asymmetric quantization of [
 
 HPU: docker image with Gaudi Software Stack is recommended; please refer to the following script for environment setup. More details can be found in the [Gaudi Guide](https://docs.habana.ai/en/latest/Installation_Guide/Bare_Metal_Fresh_OS.html#launch-docker-image-that-was-built).
 
- CUDA (must install from source): git clone https://github.com/intel/auto-round && cd auto-round && pip install -vvv --no-build-isolation -e .
+ **CUDA (must install from source)**: git clone https://github.com/intel/auto-round && cd auto-round && pip install -vvv --no-build-isolation -e .
 
 ```python
 from auto_round import AutoHfQuantizer ##must import
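
For context, a minimal sketch of how the int4 checkpoint is typically loaded once auto-round is installed from source; the model id, prompt, and generation settings below are illustrative assumptions, not part of this commit.

```python
from auto_round import AutoHfQuantizer  ## must import so transformers can resolve the auto-round quantization config
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical placeholder: substitute the actual OPEA int4 model id from this repository.
quantized_model_dir = "OPEA/your-int4-model-id"

tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir)
model = AutoModelForCausalLM.from_pretrained(
    quantized_model_dir,
    torch_dtype="auto",   # pick up the dtype stored in the checkpoint config
    device_map="auto",    # place layers on the available accelerator(s)
)

prompt = "Explain what int4 weight quantization is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```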