Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


legendary-river-koalpaca - GGUF
- Model creator: https://huggingface.co/ahnyeonchan/
- Original model: https://huggingface.co/ahnyeonchan/legendary-river-koalpaca/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [legendary-river-koalpaca.Q2_K.gguf](https://huggingface.co/RichardErkhov/ahnyeonchan_-_legendary-river-koalpaca-gguf/blob/main/legendary-river-koalpaca.Q2_K.gguf) | Q2_K | 0.56GB |
| [legendary-river-koalpaca.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ahnyeonchan_-_legendary-river-koalpaca-gguf/blob/main/legendary-river-koalpaca.Q3_K_S.gguf) | Q3_K_S | 0.64GB |
| [legendary-river-koalpaca.Q3_K.gguf](https://huggingface.co/RichardErkhov/ahnyeonchan_-_legendary-river-koalpaca-gguf/blob/main/legendary-river-koalpaca.Q3_K.gguf) | Q3_K | 0.7GB |
| [legendary-river-koalpaca.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ahnyeonchan_-_legendary-river-koalpaca-gguf/blob/main/legendary-river-koalpaca.Q3_K_M.gguf) | Q3_K_M | 0.7GB |
| [legendary-river-koalpaca.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ahnyeonchan_-_legendary-river-koalpaca-gguf/blob/main/legendary-river-koalpaca.Q3_K_L.gguf) | Q3_K_L | 0.73GB |
| [legendary-river-koalpaca.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ahnyeonchan_-_legendary-river-koalpaca-gguf/blob/main/legendary-river-koalpaca.IQ4_XS.gguf) | IQ4_XS | 0.74GB |
| [legendary-river-koalpaca.Q4_0.gguf](https://huggingface.co/RichardErkhov/ahnyeonchan_-_legendary-river-koalpaca-gguf/blob/main/legendary-river-koalpaca.Q4_0.gguf) | Q4_0 | 0.77GB |
| [legendary-river-koalpaca.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ahnyeonchan_-_legendary-river-koalpaca-gguf/blob/main/legendary-river-koalpaca.IQ4_NL.gguf) | IQ4_NL | 0.77GB |
| [legendary-river-koalpaca.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ahnyeonchan_-_legendary-river-koalpaca-gguf/blob/main/legendary-river-koalpaca.Q4_K_S.gguf) | Q4_K_S | 0.8GB |
| [legendary-river-koalpaca.Q4_K.gguf](https://huggingface.co/RichardErkhov/ahnyeonchan_-_legendary-river-koalpaca-gguf/blob/main/legendary-river-koalpaca.Q4_K.gguf) | Q4_K | 0.86GB |
| [legendary-river-koalpaca.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ahnyeonchan_-_legendary-river-koalpaca-gguf/blob/main/legendary-river-koalpaca.Q4_K_M.gguf) | Q4_K_M | 0.86GB |
| [legendary-river-koalpaca.Q4_1.gguf](https://huggingface.co/RichardErkhov/ahnyeonchan_-_legendary-river-koalpaca-gguf/blob/main/legendary-river-koalpaca.Q4_1.gguf) | Q4_1 | 0.84GB |
| [legendary-river-koalpaca.Q5_0.gguf](https://huggingface.co/RichardErkhov/ahnyeonchan_-_legendary-river-koalpaca-gguf/blob/main/legendary-river-koalpaca.Q5_0.gguf) | Q5_0 | 0.92GB |
| [legendary-river-koalpaca.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ahnyeonchan_-_legendary-river-koalpaca-gguf/blob/main/legendary-river-koalpaca.Q5_K_S.gguf) | Q5_K_S | 0.94GB |
| [legendary-river-koalpaca.Q5_K.gguf](https://huggingface.co/RichardErkhov/ahnyeonchan_-_legendary-river-koalpaca-gguf/blob/main/legendary-river-koalpaca.Q5_K.gguf) | Q5_K | 0.98GB |
| [legendary-river-koalpaca.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ahnyeonchan_-_legendary-river-koalpaca-gguf/blob/main/legendary-river-koalpaca.Q5_K_M.gguf) | Q5_K_M | 0.98GB |
| [legendary-river-koalpaca.Q5_1.gguf](https://huggingface.co/RichardErkhov/ahnyeonchan_-_legendary-river-koalpaca-gguf/blob/main/legendary-river-koalpaca.Q5_1.gguf) | Q5_1 | 1.0GB |
| [legendary-river-koalpaca.Q6_K.gguf](https://huggingface.co/RichardErkhov/ahnyeonchan_-_legendary-river-koalpaca-gguf/blob/main/legendary-river-koalpaca.Q6_K.gguf) | Q6_K | 1.15GB |
| [legendary-river-koalpaca.Q8_0.gguf](https://huggingface.co/RichardErkhov/ahnyeonchan_-_legendary-river-koalpaca-gguf/blob/main/legendary-river-koalpaca.Q8_0.gguf) | Q8_0 | 1.4GB |
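
For reference, here is a minimal sketch of downloading one of the quants above and running it locally with `llama-cpp-python` (assumes `pip install huggingface_hub llama-cpp-python`; the chosen quant file, context size, and prompt are illustrative assumptions, not part of the original card):

```
# Minimal sketch: fetch one GGUF quant from this repo and run it locally.
# The quant choice (Q4_K_M), context size, and prompt are assumptions.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a single file from the table above; any row works the same way.
model_path = hf_hub_download(
    repo_id="RichardErkhov/ahnyeonchan_-_legendary-river-koalpaca-gguf",
    filename="legendary-river-koalpaca.Q4_K_M.gguf",
)

# Load the quantized model and generate a short completion.
llm = Llama(model_path=model_path, n_ctx=2048)
out = llm("한국의 수도는 어디입니까?", max_tokens=64)  # "What is the capital of Korea?"
print(out["choices"][0]["text"])
```

As a rule of thumb, smaller quants (Q2_K, Q3_K_S) trade answer quality for memory, while Q4_K_M and above are usually a safer default when RAM allows.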

Original model description:
---
license: apache-2.0
language:
- ko
pipeline_tag: text-generation
tags:
- instruction_ft
---

We built this model based on princeton-nlp/Sheared-LLaMA-1.3B.
We fine-tuned it on Korean Wikipedia and the KoAlpaca dataset using LoRA.

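The original card ships no training code; purely as an illustration, a LoRA setup of this kind with the `peft` library might look like the sketch below (the rank, alpha, target modules, and training loop are assumptions, not the authors' actual configuration):

```
# Illustrative LoRA fine-tuning sketch with peft/transformers.
# Hyperparameters and target modules are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "princeton-nlp/Sheared-LLaMA-1.3B"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Attach low-rank adapters to the attention projections; only the adapter
# weights are trained while the 1.3B base model stays frozen.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()
# ...then train with transformers.Trainer on instruction-formatted Korean data.
```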

Please see the following information about princeton-nlp/Sheared-LLaMA-1.3B.

**Paper**: [https://arxiv.org/pdf/2310.06694.pdf](https://arxiv.org/pdf/2310.06694.pdf)
**Code**: https://github.com/princeton-nlp/LLM-Shearing
**Models**: [Sheared-LLaMA-1.3B](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B), [Sheared-LLaMA-2.7B](https://huggingface.co/princeton-nlp/Sheared-LLaMA-2.7B)
**Pruned Models without Continued Pre-training**: [Sheared-LLaMA-1.3B-Pruned](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B-Pruned), [Sheared-LLaMA-2.7B-Pruned](https://huggingface.co/princeton-nlp/Sheared-LLaMA-2.7B-Pruned)
**Instruction-tuned Models**: [Sheared-LLaMA-1.3B-ShareGPT](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B-ShareGPT), [Sheared-LLaMA-2.7B-ShareGPT](https://huggingface.co/princeton-nlp/Sheared-LLaMA-2.7B-ShareGPT)

**License**: Must comply with the license of Llama 2, since this model is derived from Llama 2.

---

Sheared-LLaMA-1.3B is a model pruned and further pre-trained from [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf). We dynamically load data from different domains in the [RedPajama dataset](https://github.com/togethercomputer/RedPajama-Data) to prune and continue pre-training the model. We use 0.4B tokens for pruning and 50B tokens for continued pre-training of the pruned model. The model can be loaded with Hugging Face Transformers via

```
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("princeton-nlp/Sheared-LLaMA-1.3B")
```
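
A quick end-to-end check might then look like this (the prompt and generation settings are illustrative):

```
# Small usage sketch: tokenize a prompt and generate a continuation.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "princeton-nlp/Sheared-LLaMA-1.3B"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tokenizer("The capital of Korea is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```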

- Smaller-scale
- Same vocabulary as LLaMA1 and LLaMA2
- Derived with a budget of 50B tokens by utilizing existing strong LLMs

## Downstream Tasks

We evaluate on an extensive set of downstream tasks, including reasoning, reading comprehension, language modeling, and knowledge-intensive tasks. Our Sheared-LLaMA models outperform existing open-source models of comparable size.

**7B (reference)**

| Model | # Pre-training Tokens | Average Performance |
| ------------------- | --------------------- | ------------------- |
| LLaMA2-7B | 2T | 64.6 |

**1.3B**

| Model | # Pre-training Tokens | Average Performance |
| ------------------- | --------------------- | ------------------- |
| OPT-1.3B | 300B | 48.2 |
| Pythia-1.4B | 300B | 48.9 |
| **Sheared-LLaMA-1.3B** | **50B** | **51.0** |

**3B**

| Model | # Pre-training Tokens | Average Performance |
| ------------------- | --------------------- | ------------------- |
| OPT-2.7B | 300B | 51.4 |
| Pythia-2.8B | 300B | 52.5 |
| INCITE-Base-3B | 800B | 54.7 |
| Open-LLaMA-3B-v1 | 1T | 55.1 |
| Open-LLaMA-3B-v2 | 1T | 55.7 |
| Sheared-LLaMA-2.7B | 50B | 56.7 |

## Bibtex
```
@article{xia2023sheared,
  title={Sheared llama: Accelerating language model pre-training via structured pruning},
  author={Xia, Mengzhou and Gao, Tianyu and Zeng, Zhiyuan and Chen, Danqi},
  journal={arXiv preprint arXiv:2310.06694},
  year={2023}
}
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_princeton-nlp__Sheared-LLaMA-1.3B).

| Metric              | Value |
|---------------------|-------|
| Avg.                | 31.47 |
| ARC (25-shot)       | 32.85 |
| HellaSwag (10-shot) | 60.91 |
| MMLU (5-shot)       | 25.71 |
| TruthfulQA (0-shot) | 37.14 |
| Winogrande (5-shot) | 58.64 |
| GSM8K (5-shot)      | 0.45  |
| DROP (3-shot)       | 4.56  |