Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


vicuna-68m - bnb 8bits
- Model creator: https://huggingface.co/double7/
- Original model: https://huggingface.co/double7/vicuna-68m/
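
A minimal sketch of loading this bnb 8-bit quantization with `transformers` and `bitsandbytes`. The repository id below is a placeholder, not a real id; also note that if the checkpoint was serialized already quantized, it carries its own quantization config and passing `BitsAndBytesConfig` again may be unnecessary:

```python
# Hedged sketch: load a bnb 8-bit quantization with transformers.
# REPO_ID is a placeholder -- substitute this repository's id on the Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

REPO_ID = "<this-repo-id>"

tokenizer = AutoTokenizer.from_pretrained(REPO_ID)
model = AutoModelForCausalLM.from_pretrained(
    REPO_ID,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # bnb 8 bits
    device_map="auto",
)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```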




Original model description:
---
license: apache-2.0
datasets:
- anon8231489123/ShareGPT_Vicuna_unfiltered
language:
- en
pipeline_tag: text-generation
---
## Model description
This is a Vicuna-like model with only 68M parameters, fine-tuned from [LLaMA-68m](https://huggingface.co/JackFram/llama-68m) on [ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered) data.

The training setup follows the [Vicuna suite](https://github.com/lm-sys/FastChat).

The model is mainly developed as a base Small Speculative Model in the [MCSD paper](https://arxiv.org/pdf/2401.06706.pdf). Compared with LLaMA-68m, it aligns better with the Vicuna models while losing little alignment with the LLaMA models (see the table below; a draft-model usage sketch follows it).


| Draft Model    | Target Model  | Alignment |
| -------------- | ------------- | --------- |
| LLaMA-68/160M  | LLaMA-13/33B  | good      |
| LLaMA-68/160M  | Vicuna-13/33B | poor      |
| Vicuna-68/160M | LLaMA-13/33B  | good      |
| Vicuna-68/160M | Vicuna-13/33B | good      |
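
As a usage sketch of the draft-model role above, the snippet below runs vicuna-68m as the assistant model in `transformers`' assisted generation, one implementation of speculative decoding. The target checkpoint `lmsys/vicuna-13b-v1.3` is an assumption; any well-aligned Vicuna target should work, and draft and target must share a tokenizer (both use the LLaMA tokenizer):

```python
# Hedged sketch: vicuna-68m as the draft (assistant) model in transformers'
# assisted generation. The draft proposes several tokens per step and the
# target verifies them in a single forward pass.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

TARGET_ID = "lmsys/vicuna-13b-v1.3"  # assumed target; pick any aligned Vicuna

tokenizer = AutoTokenizer.from_pretrained(TARGET_ID)
target = AutoModelForCausalLM.from_pretrained(
    TARGET_ID, torch_dtype=torch.float16, device_map="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    "double7/vicuna-68m", torch_dtype=torch.float16
).to(target.device)

inputs = tokenizer("What is speculative decoding?", return_tensors="pt").to(target.device)
output = target.generate(**inputs, assistant_model=draft, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```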