justheuristic committed: Update README.md
README.md CHANGED
@@ -16,8 +16,8 @@ For this quantization, we used 1 codebook of 16 bits for groups of 8 weights.
 
 | Model | AQLM scheme | WikiText 2 PPL | Model size, Gb | Hub link |
 |------------|-------------|----------------|----------------|--------------------------------------------------------------------------|
-| meta-llama/Meta-Llama-3-8B
-| meta-llama/Meta-Llama-3-70B | 1x16g8 | 4.57 | 21.9 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-70b-AQLM-PV-2Bit-1x16-hf)|
+| meta-llama/Meta-Llama-3-8B | 1x16g8 | 6.99 | 4.1 | [Link](https://huggingface.co/ISTA-DASLab/Meta-Llama-3-8B-AQLM-PV-2Bit-1x16) |
+| meta-llama/Meta-Llama-3-70B (this) | 1x16g8 | 4.57 | 21.9 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-70b-AQLM-PV-2Bit-1x16-hf)|
 
 The 1x16g16 (1-bit) models are on the way, as soon as we update the inference lib with their respective kernels.
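For context, the "1x16g8" scheme in the table means one 16-bit codebook index per group of 8 weights, i.e. about 2 bits per weight (hence the "2Bit" tag in the Hub repo names); by the same arithmetic, 1x16g16 works out to roughly 1 bit per weight. Below is a minimal loading sketch, assuming the `aqlm` inference library is installed (e.g. `pip install aqlm[gpu]`) and a `transformers` version with AQLM support; the prompt and generation settings are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hub repo from the table above; the 8B checkpoint is ~4.1 Gb of weights.
repo = "ISTA-DASLab/Meta-Llama-3-8B-AQLM-PV-2Bit-1x16"

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype="auto",  # keep the dtype stored in the checkpoint
    device_map="auto",   # place layers on available GPU(s)
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```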