nsfwthrowitaway69 committed
Commit 2f3951c
Parent(s): 0a8cc2e
Update README.md
README.md
CHANGED
@@ -12,7 +12,7 @@ tags:
 
 ## Model Details
 
-- A result of interleaving layers of [Sao10K/Euryale-1.3-L2-70B](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B), [migtissera/SynthIA-70B-v1.
+- A result of interleaving layers of [Sao10K/Euryale-1.3-L2-70B](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B), [migtissera/SynthIA-70B-v1.2b](https://huggingface.co/migtissera/SynthIA-70B-v1.2b), and [Xwin-LM/Xwin-LM-70B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1) using [mergekit](https://github.com/cg123/mergekit).
 - The resulting model has 120 layers and 103 billion parameters.
 - See mergekit-config.yml for details on the merge method used.
 - See the `exl2-*` branches for exllama2 quantizations. The 5.65 bpw quant should fit in 80GB VRAM, and the 3.35 bpw quant should fit in 48GB VRAM.
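
The figures in the updated model card can be sanity-checked with a little arithmetic. The sketch below is illustrative only: the slice boundaries are hypothetical (the real layout is in the repository's mergekit-config.yml), and the per-layer figures use the public Llama-2-70B geometry; only the 120-layer / ~103B totals and the 5.65 / 3.35 bpw settings come from this README.

```python
# Rough sanity check of the merge's size (NOT the repository's actual
# mergekit-config.yml): a passthrough "layer interleave" of Llama-2-70B models.
# The slice boundaries below are illustrative assumptions; only the totals
# (120 layers, ~103B parameters) and the bpw settings come from the README.

# Hypothetical interleave plan: (source model, (start_layer, end_layer)) slices.
slices = [
    ("Sao10K/Euryale-1.3-L2-70B",    (0, 40)),
    ("Xwin-LM/Xwin-LM-70B-V0.1",     (20, 60)),
    ("migtissera/SynthIA-70B-v1.2b", (40, 80)),
]
total_layers = sum(end - start for _, (start, end) in slices)

# Public Llama-2-70B geometry.
hidden, intermediate, kv_heads, head_dim, vocab = 8192, 28672, 8, 128, 32000

# Parameters per transformer block: attention (full Q/O, grouped-query K/V) + SwiGLU MLP.
attn = 2 * hidden * hidden + 2 * hidden * kv_heads * head_dim
mlp = 3 * hidden * intermediate
per_layer = attn + mlp

# Token embedding and LM head are kept once, not duplicated per slice.
embeddings = 2 * vocab * hidden

total_params = total_layers * per_layer + embeddings
print(f"{total_layers} layers, ~{total_params / 1e9:.0f}B parameters")

# Weight-only footprint at the quoted exl2 bits-per-weight settings
# (KV cache and activations need extra VRAM on top of this).
for bpw in (5.65, 3.35):
    print(f"{bpw} bpw -> ~{total_params * bpw / 8 / 1e9:.0f} GB of weights")
```

Under those assumptions the weight-only estimates come out to roughly 73 GB at 5.65 bpw and 43 GB at 3.35 bpw, which lines up with the README's 80GB and 48GB VRAM guidance once KV cache and activation overhead are added.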