bartowski committed on
Commit 19b8648 · verified · 1 Parent(s): 0f5ee7f

Quant for 4.0

README.md CHANGED
@@ -1,77 +1,59 @@
  ---
  license: apache-2.0
- quantized_by: bartowski
- pipeline_tag: text-generation
  ---
-
- ## Exllama v2 Quantizations of Mistral-22B-v0.2
-
- Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.18">turboderp's ExLlamaV2 v0.0.18</a> for quantization.
-
- <b>The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)</b>
-
- Each branch contains an individual bits per weight, with the main one containing only the measurement.json for further conversions.
-
- Conversion was done using the default calibration dataset.
-
- Default arguments used except when the bits per weight is above 6.0, at that point the lm_head layer is quantized at 8 bits per weight instead of the default 6.
-
- Original model: https://huggingface.co/Vezora/Mistral-22B-v0.2
-
- ## Prompt Format
-
- ```
- ### System: {system_prompt}
- ### Human: {prompt}
- ### Assistant:
- ```
-
- <a href="https://huggingface.co/bartowski/Mistral-22B-v0.2-exl2/tree/8_0">8.0 bits per weight</a>
-
- <a href="https://huggingface.co/bartowski/Mistral-22B-v0.2-exl2/tree/6_5">6.5 bits per weight</a>
-
- <a href="https://huggingface.co/bartowski/Mistral-22B-v0.2-exl2/tree/5_0">5.0 bits per weight</a>
-
- <a href="https://huggingface.co/bartowski/Mistral-22B-v0.2-exl2/tree/4_25">4.25 bits per weight</a>
-
- <a href="https://huggingface.co/bartowski/Mistral-22B-v0.2-exl2/tree/3_5">3.5 bits per weight</a>
-
- <a href="https://huggingface.co/bartowski/Mistral-22B-v0.2-exl2/tree/3_0">3.0 bits per weight</a>
-
-
- ## Download instructions
-
- With git:
-
- ```shell
- git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Mistral-22B-v0.2-exl2
- ```
-
- With huggingface hub (credit to TheBloke for instructions):
-
- ```shell
- pip3 install huggingface-hub
- ```
-
- To download the `main` (only useful if you only care about measurement.json) branch to a folder called `Mistral-22B-v0.2-exl2`:
-
- ```shell
- mkdir Mistral-22B-v0.2-exl2
- huggingface-cli download bartowski/Mistral-22B-v0.2-exl2 --local-dir Mistral-22B-v0.2-exl2 --local-dir-use-symlinks False
- ```
-
- To download from a different branch, add the `--revision` parameter:
-
- Linux:
-
- ```shell
- mkdir Mistral-22B-v0.2-exl2-6_5
- huggingface-cli download bartowski/Mistral-22B-v0.2-exl2 --revision 6_5 --local-dir Mistral-22B-v0.2-exl2-6_5 --local-dir-use-symlinks False
- ```
-
- Windows (which apparently doesn't like _ in folders sometimes?):
-
- ```shell
- mkdir Mistral-22B-v0.2-exl2-6.5
- huggingface-cli download bartowski/Mistral-22B-v0.2-exl2 --revision 6_5 --local-dir Mistral-22B-v0.2-exl2-6.5 --local-dir-use-symlinks False
- ```
 
  ---
  license: apache-2.0
  ---
+ <img src="https://huggingface.co/Vezora/Mistral-22B-v0.1/resolve/main/unsloth.png" width="100" height="150" />
+
+ ### Mistral-22b-v0.2 Release Announcement 🚀
+
+ ## This model is not an MoE, it is in fact a 22B parameter dense model!
+
+ **Date**: April 13
+ **Creator**: [Nicolas Mejia-Petit](https://twitter.com/mejia_petit)
+
+ ### Overview
+ - Just two days after our release of **Mistral-22b-v0.1**, we are excited to introduce our handcrafted experimental model, **Mistral-22b-v0.2**. This model is a culmination of equal knowledge distilled from all experts into a single, dense 22B model. It is not a single trained expert; rather, it is a compressed MoE model turned into a dense 22B model. This is the first working MoE-to-dense model conversion.
+ - v0.2 was trained on 8x more data than v0.1!
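+ The exact compression method is unpublished (see the upcoming paper). Purely as an illustration of what "distilling all experts into a single dense model" could look like, here is a naive sketch that uniformly averages expert MLP weight matrices; this is my own toy example, not the method actually used for Mistral-22b:

```python
import numpy as np

def merge_experts_naive(expert_weights: list) -> np.ndarray:
    """Illustrative only: collapse several expert MLP weight matrices into
    one dense matrix by uniform averaging. The real Mistral-22b conversion
    uses a different, unpublished method."""
    return np.mean(np.stack(expert_weights), axis=0)

# Toy example: 8 experts, each with a 4x4 weight matrix -> one dense 4x4 matrix
experts = [np.full((4, 4), float(i)) for i in range(8)]
dense = merge_experts_naive(experts)
print(dense.shape)  # (4, 4); every entry is the mean of 0..7, i.e. 3.5
```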
+
+ ### Capabilities
+ - **Math Proficiency**: The model exhibits strong mathematical abilities, despite not being trained on math.
+ - **Better at Coding**: The model is significantly better at coding than v0.1. It passed some of my simple coding tests, such as "Create a simple HTML site with a button that changes the background color to a random color", which v0.1 failed.
+ - **More Cohesive**: This v0.2 model is significantly more cohesive, and better at understanding prompts and answering with the appropriate response.
+ - **Highly Uncensored**: Since this model was also re-aligned to be uncensored, it can just answer anything you ask. Use at your own risk; we take no responsibility for your generated responses.
+ - **Multi-Turn**: The dataset this model was trained on consists mostly of multi-turn conversations, spanning many different topics, with some emphasis on coding.
+ - **JSON Mode**: I trained this model on answering in JSON and using JSON tools. I have yet to test it in depth, but preliminary tests show that it works.
+ - **Agent Abilities**: I trained this model on agent datasets that teach it to do real-world tasks such as picking up an object, and even navigating a webpage based on its HTML.
+ - **Good Chili Recipe**: The model gives a good chili recipe :)
+ - **32k Sequence Length**: This model was trained with a 32k sequence length.
+ - **GUANACO PROMPT FORMAT**: YOU MUST USE THE GUANACO PROMPT FORMAT SHOWN BELOW IN USAGE. Not using this prompt format will lead to suboptimal results.
+
+ ### Experimental Nature
+ Please note that Mistral-22b is still a WIP. v0.3 has now started training, with a different method than used before, to hopefully make the model more well-rounded in its internal knowledge. Through my testing I found v0.2 to be a significant improvement over v0.1.
+
+ ### Upcoming Release: V.3
+ - v0.3 will feature a different base model for testing purposes; however, this model is pretty darn good for a second test. :)
+ - Preliminary results with my new v0.3 base model show that it achieves a lower loss after the first epoch than the base model used for v0.1 and v0.2, so we have started training v0.3 with the new base model and the longer dataset. It will be done and released in the next 48 hours. :)
+
+ ### Stay Updated
+ **V.3**, coming soon! It is currently training and will be done in the next ~24 hours. 🌟Paper Coming Soon🌟
+ - There will be more of these 22b models: 5-6 siblings, until I find what gives the best results for MoE compression.
+ - However, I am very surprised at how good this v0.2 model is, from my small amount of testing.
+ - I will be releasing a blog post soon on how I did this. I will still release a paper with testing and results, but I'm going to rush out the blog post beforehand so I can share how I did this. I'd just like to make sure the right people get the right credit for the work that I used, so I have to read up some and make sure everyone gets the credit they deserve (and I need quality sleep; my entire sleep schedule has been abominated since Mixtral's drop). I appreciate your understanding.
+ - I have a bunch of other methods I have yet to try, and many of those methods required me making this model and running the initial tests, so the models are only going to get better from here. I appreciate feedback, thank you!
+
+ ### Usage:
+ - This model requires a specific chat template; since the training format was Guanaco, this is what it looks like:
+ - "### System: You are a helpful assistant. ### Human: Give me the best chili recipe you can ### Assistant: Here is the best chili recipe..."
+
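+ The template above can be assembled with a small helper; this is a minimal sketch (the function name is mine, only the `### System` / `### Human` / `### Assistant` markers come from the model card):

```python
def build_guanaco_prompt(system_prompt: str, user_prompt: str) -> str:
    """Assemble a Guanaco-style prompt string as described in the Usage
    section. Illustrative helper, not part of the model's tooling."""
    return (
        f"### System: {system_prompt} "
        f"### Human: {user_prompt} "
        f"### Assistant:"
    )

prompt = build_guanaco_prompt(
    "You are a helpful assistant.",
    "Give me the best chili recipe you can",
)
print(prompt)
```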
+
+ ## Thank you!
+ - Thank you to [Daniel Han](https://twitter.com/danielhanchen) for Unsloth AI, which was used to train this model; this led to a 2-3x speed increase and a 2-3x decrease in memory consumption.
+ - Thank you to [Charles Goddard](https://twitter.com/chargoddard) for providing me with a script that was necessary to make this model.
+ - Thank you to Mistral for releasing another wonderful open source model, under Apache 2.0.
+ - Thank you to [Tim Dettmers](https://twitter.com/Tim_Dettmers) for creating QLoRA.
+ - Thank you to [Tri Dao](https://twitter.com/tri_dao) for creating Flash Attention.
+ - Thank you to Microsoft for the LoRA paper and the SliceGPT paper.
+ - Thank you to the Hugging Face team, for everything.❤️ We really do appreciate you guys and all your hard work and commitment to the open source community!❤️
+ - Thank you to [Jon Durbin](https://x.com/jon_durbin?s=21): I used one of his DPO datasets converted to SFT; more info will be explained in the paper.
+
+ ## Future plans: train 4-5 more of these experimental models, gather preliminary testing results, run evaluations on the models that show the best chance of excelling, and then use the best one.
config.json ADDED
@@ -0,0 +1,36 @@
+ {
+ "architectures": [
+ "MistralForCausalLM"
+ ],
+ "attention_dropout": 0.0,
+ "bos_token_id": 1,
+ "eos_token_id": 2,
+ "hidden_act": "silu",
+ "hidden_size": 6144,
+ "initializer_range": 0.02,
+ "intermediate_size": 16384,
+ "max_position_embeddings": 65536,
+ "model_type": "mistral",
+ "num_attention_heads": 48,
+ "num_hidden_layers": 56,
+ "num_key_value_heads": 8,
+ "rms_norm_eps": 1e-05,
+ "rope_theta": 1000000,
+ "sliding_window": null,
+ "tie_word_embeddings": false,
+ "torch_dtype": "bfloat16",
+ "transformers_version": "4.40.0.dev0",
+ "use_cache": true,
+ "vocab_size": 32000,
+ "quantization_config": {
+ "quant_method": "exl2",
+ "version": "0.0.18",
+ "bits": 4.0,
+ "head_bits": 6,
+ "calibration": {
+ "rows": 100,
+ "length": 2048,
+ "dataset": "(default)"
+ }
+ }
+ }
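The `quantization_config` block added above records how this exl2 quant was made (4.0 bits per weight, 6-bit head, default calibration). It can be read programmatically; a minimal sketch over the same fields using only the standard library:

```python
import json

# A trimmed copy of the quantization_config fields from the config.json above.
config_text = """
{ "quantization_config": { "quant_method": "exl2", "version": "0.0.18",
  "bits": 4.0, "head_bits": 6,
  "calibration": {"rows": 100, "length": 2048, "dataset": "(default)"} } }
"""

quant = json.loads(config_text)["quantization_config"]
print(f'{quant["quant_method"]} @ {quant["bits"]} bpw (head: {quant["head_bits"]} bits)')
# exl2 @ 4.0 bpw (head: 6 bits)
```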
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+ "_from_model_config": true,
+ "bos_token_id": 1,
+ "eos_token_id": 2,
+ "transformers_version": "4.39.3"
+ }
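The `model.safetensors.index.json` added below maps every tensor name to the shard file that holds it. Such a `weight_map` can be summarized per shard with a few lines; a sketch over toy data (not the full index below):

```python
import json
from collections import Counter

# A three-entry excerpt in the same shape as the index file's weight_map.
index_text = """
{ "weight_map": {
  "lm_head.weight": "model-00009-of-00009.safetensors",
  "model.embed_tokens.weight": "model-00001-of-00009.safetensors",
  "model.layers.0.mlp.down_proj.weight": "model-00001-of-00009.safetensors" } }
"""

weight_map = json.loads(index_text)["weight_map"]
per_shard = Counter(weight_map.values())  # tensors stored in each shard file
print(per_shard["model-00001-of-00009.safetensors"])  # 2
```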
model.safetensors.index.json ADDED
@@ -0,0 +1,514 @@
+ {
+ "metadata": {
+ "total_size": 44475691008
+ },
+ "weight_map": {
+ "lm_head.weight": "model-00009-of-00009.safetensors",
+ "model.embed_tokens.weight": "model-00001-of-00009.safetensors",
+ "model.layers.0.input_layernorm.weight": "model-00001-of-00009.safetensors",
+ "model.layers.0.mlp.down_proj.weight": "model-00001-of-00009.safetensors",
+ "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00009.safetensors",
+ "model.layers.0.mlp.up_proj.weight": "model-00001-of-00009.safetensors",
+ "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00009.safetensors",
+ "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00009.safetensors",
+ "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00009.safetensors",
+ "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00009.safetensors",
+ "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00009.safetensors",
+ "model.layers.1.input_layernorm.weight": "model-00001-of-00009.safetensors",
+ "model.layers.1.mlp.down_proj.weight": "model-00001-of-00009.safetensors",
+ "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00009.safetensors",
+ "model.layers.1.mlp.up_proj.weight": "model-00001-of-00009.safetensors",
+ "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00009.safetensors",
+ "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00009.safetensors",
+ "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00009.safetensors",
+ "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00009.safetensors",
+ "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00009.safetensors",
+ "model.layers.10.input_layernorm.weight": "model-00002-of-00009.safetensors",
+ "model.layers.10.mlp.down_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.10.mlp.gate_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.10.mlp.up_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.10.post_attention_layernorm.weight": "model-00002-of-00009.safetensors",
+ "model.layers.10.self_attn.k_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.10.self_attn.o_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.10.self_attn.q_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.10.self_attn.v_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.11.input_layernorm.weight": "model-00002-of-00009.safetensors",
+ "model.layers.11.mlp.down_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.11.mlp.gate_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.11.mlp.up_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.11.post_attention_layernorm.weight": "model-00002-of-00009.safetensors",
+ "model.layers.11.self_attn.k_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.11.self_attn.o_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.11.self_attn.q_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.11.self_attn.v_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.12.input_layernorm.weight": "model-00003-of-00009.safetensors",
+ "model.layers.12.mlp.down_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.12.mlp.gate_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.12.mlp.up_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.12.post_attention_layernorm.weight": "model-00003-of-00009.safetensors",
+ "model.layers.12.self_attn.k_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.12.self_attn.o_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.12.self_attn.q_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.12.self_attn.v_proj.weight": "model-00002-of-00009.safetensors",
+ "model.layers.13.input_layernorm.weight": "model-00003-of-00009.safetensors",
+ "model.layers.13.mlp.down_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.13.mlp.gate_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.13.mlp.up_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.13.post_attention_layernorm.weight": "model-00003-of-00009.safetensors",
+ "model.layers.13.self_attn.k_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.13.self_attn.o_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.13.self_attn.q_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.13.self_attn.v_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.14.input_layernorm.weight": "model-00003-of-00009.safetensors",
+ "model.layers.14.mlp.down_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.14.mlp.gate_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.14.mlp.up_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.14.post_attention_layernorm.weight": "model-00003-of-00009.safetensors",
+ "model.layers.14.self_attn.k_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.14.self_attn.o_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.14.self_attn.q_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.14.self_attn.v_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.15.input_layernorm.weight": "model-00003-of-00009.safetensors",
+ "model.layers.15.mlp.down_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.15.mlp.gate_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.15.mlp.up_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.15.post_attention_layernorm.weight": "model-00003-of-00009.safetensors",
+ "model.layers.15.self_attn.k_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.15.self_attn.o_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.15.self_attn.q_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.15.self_attn.v_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.16.input_layernorm.weight": "model-00003-of-00009.safetensors",
+ "model.layers.16.mlp.down_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.16.mlp.gate_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.16.mlp.up_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.16.post_attention_layernorm.weight": "model-00003-of-00009.safetensors",
+ "model.layers.16.self_attn.k_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.16.self_attn.o_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.16.self_attn.q_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.16.self_attn.v_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.17.input_layernorm.weight": "model-00003-of-00009.safetensors",
+ "model.layers.17.mlp.down_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.17.mlp.gate_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.17.mlp.up_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.17.post_attention_layernorm.weight": "model-00003-of-00009.safetensors",
+ "model.layers.17.self_attn.k_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.17.self_attn.o_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.17.self_attn.q_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.17.self_attn.v_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.18.input_layernorm.weight": "model-00004-of-00009.safetensors",
+ "model.layers.18.mlp.down_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.18.mlp.gate_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.18.mlp.up_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.18.post_attention_layernorm.weight": "model-00004-of-00009.safetensors",
+ "model.layers.18.self_attn.k_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.18.self_attn.o_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.18.self_attn.q_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.18.self_attn.v_proj.weight": "model-00003-of-00009.safetensors",
+ "model.layers.19.input_layernorm.weight": "model-00004-of-00009.safetensors",
+ "model.layers.19.mlp.down_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.19.mlp.gate_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.19.mlp.up_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.19.post_attention_layernorm.weight": "model-00004-of-00009.safetensors",
+ "model.layers.19.self_attn.k_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.19.self_attn.o_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.19.self_attn.q_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.19.self_attn.v_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.2.input_layernorm.weight": "model-00001-of-00009.safetensors",
+ "model.layers.2.mlp.down_proj.weight": "model-00001-of-00009.safetensors",
+ "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00009.safetensors",
+ "model.layers.2.mlp.up_proj.weight": "model-00001-of-00009.safetensors",
+ "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00009.safetensors",
+ "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00009.safetensors",
+ "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00009.safetensors",
+ "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00009.safetensors",
+ "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00009.safetensors",
+ "model.layers.20.input_layernorm.weight": "model-00004-of-00009.safetensors",
+ "model.layers.20.mlp.down_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.20.mlp.gate_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.20.mlp.up_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.20.post_attention_layernorm.weight": "model-00004-of-00009.safetensors",
+ "model.layers.20.self_attn.k_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.20.self_attn.o_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.20.self_attn.q_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.20.self_attn.v_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.21.input_layernorm.weight": "model-00004-of-00009.safetensors",
+ "model.layers.21.mlp.down_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.21.mlp.gate_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.21.mlp.up_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.21.post_attention_layernorm.weight": "model-00004-of-00009.safetensors",
+ "model.layers.21.self_attn.k_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.21.self_attn.o_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.21.self_attn.q_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.21.self_attn.v_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.22.input_layernorm.weight": "model-00004-of-00009.safetensors",
+ "model.layers.22.mlp.down_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.22.mlp.gate_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.22.mlp.up_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.22.post_attention_layernorm.weight": "model-00004-of-00009.safetensors",
+ "model.layers.22.self_attn.k_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.22.self_attn.o_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.22.self_attn.q_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.22.self_attn.v_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.23.input_layernorm.weight": "model-00004-of-00009.safetensors",
+ "model.layers.23.mlp.down_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.23.mlp.gate_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.23.mlp.up_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.23.post_attention_layernorm.weight": "model-00004-of-00009.safetensors",
+ "model.layers.23.self_attn.k_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.23.self_attn.o_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.23.self_attn.q_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.23.self_attn.v_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.24.input_layernorm.weight": "model-00005-of-00009.safetensors",
+ "model.layers.24.mlp.down_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.24.mlp.gate_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.24.mlp.up_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.24.post_attention_layernorm.weight": "model-00005-of-00009.safetensors",
+ "model.layers.24.self_attn.k_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.24.self_attn.o_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.24.self_attn.q_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.24.self_attn.v_proj.weight": "model-00004-of-00009.safetensors",
+ "model.layers.25.input_layernorm.weight": "model-00005-of-00009.safetensors",
+ "model.layers.25.mlp.down_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.25.mlp.gate_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.25.mlp.up_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.25.post_attention_layernorm.weight": "model-00005-of-00009.safetensors",
+ "model.layers.25.self_attn.k_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.25.self_attn.o_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.25.self_attn.q_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.25.self_attn.v_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.26.input_layernorm.weight": "model-00005-of-00009.safetensors",
+ "model.layers.26.mlp.down_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.26.mlp.gate_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.26.mlp.up_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.26.post_attention_layernorm.weight": "model-00005-of-00009.safetensors",
+ "model.layers.26.self_attn.k_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.26.self_attn.o_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.26.self_attn.q_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.26.self_attn.v_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.27.input_layernorm.weight": "model-00005-of-00009.safetensors",
+ "model.layers.27.mlp.down_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.27.mlp.gate_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.27.mlp.up_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.27.post_attention_layernorm.weight": "model-00005-of-00009.safetensors",
+ "model.layers.27.self_attn.k_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.27.self_attn.o_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.27.self_attn.q_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.27.self_attn.v_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.28.input_layernorm.weight": "model-00005-of-00009.safetensors",
+ "model.layers.28.mlp.down_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.28.mlp.gate_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.28.mlp.up_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.28.post_attention_layernorm.weight": "model-00005-of-00009.safetensors",
+ "model.layers.28.self_attn.k_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.28.self_attn.o_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.28.self_attn.q_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.28.self_attn.v_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.29.input_layernorm.weight": "model-00005-of-00009.safetensors",
+ "model.layers.29.mlp.down_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.29.mlp.gate_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.29.mlp.up_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.29.post_attention_layernorm.weight": "model-00005-of-00009.safetensors",
+ "model.layers.29.self_attn.k_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.29.self_attn.o_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.29.self_attn.q_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.29.self_attn.v_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.3.input_layernorm.weight": "model-00001-of-00009.safetensors",
+ "model.layers.3.mlp.down_proj.weight": "model-00001-of-00009.safetensors",
+ "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00009.safetensors",
+ "model.layers.3.mlp.up_proj.weight": "model-00001-of-00009.safetensors",
+ "model.layers.3.post_attention_layernorm.weight": "model-00001-of-00009.safetensors",
+ "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00009.safetensors",
+ "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00009.safetensors",
+ "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00009.safetensors",
+ "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00009.safetensors",
+ "model.layers.30.input_layernorm.weight": "model-00005-of-00009.safetensors",
+ "model.layers.30.mlp.down_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.30.mlp.gate_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.30.mlp.up_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.30.post_attention_layernorm.weight": "model-00005-of-00009.safetensors",
+ "model.layers.30.self_attn.k_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.30.self_attn.o_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.30.self_attn.q_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.30.self_attn.v_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.31.input_layernorm.weight": "model-00006-of-00009.safetensors",
+ "model.layers.31.mlp.down_proj.weight": "model-00006-of-00009.safetensors",
+ "model.layers.31.mlp.gate_proj.weight": "model-00006-of-00009.safetensors",
+ "model.layers.31.mlp.up_proj.weight": "model-00006-of-00009.safetensors",
+ "model.layers.31.post_attention_layernorm.weight": "model-00006-of-00009.safetensors",
+ "model.layers.31.self_attn.k_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.31.self_attn.o_proj.weight": "model-00006-of-00009.safetensors",
+ "model.layers.31.self_attn.q_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.31.self_attn.v_proj.weight": "model-00005-of-00009.safetensors",
+ "model.layers.32.input_layernorm.weight": "model-00006-of-00009.safetensors",
+ "model.layers.32.mlp.down_proj.weight": "model-00006-of-00009.safetensors",
+ "model.layers.32.mlp.gate_proj.weight": "model-00006-of-00009.safetensors",
+ "model.layers.32.mlp.up_proj.weight": "model-00006-of-00009.safetensors",
+ "model.layers.32.post_attention_layernorm.weight": "model-00006-of-00009.safetensors",
+ "model.layers.32.self_attn.k_proj.weight": "model-00006-of-00009.safetensors",
+ "model.layers.32.self_attn.o_proj.weight": "model-00006-of-00009.safetensors",
+ "model.layers.32.self_attn.q_proj.weight": "model-00006-of-00009.safetensors",
+ "model.layers.32.self_attn.v_proj.weight": "model-00006-of-00009.safetensors",
+ "model.layers.33.input_layernorm.weight": "model-00006-of-00009.safetensors",
+ "model.layers.33.mlp.down_proj.weight": "model-00006-of-00009.safetensors",
+ "model.layers.33.mlp.gate_proj.weight": "model-00006-of-00009.safetensors",
+ "model.layers.33.mlp.up_proj.weight": "model-00006-of-00009.safetensors",
+ "model.layers.33.post_attention_layernorm.weight": "model-00006-of-00009.safetensors",
+ "model.layers.33.self_attn.k_proj.weight": "model-00006-of-00009.safetensors",
+ "model.layers.33.self_attn.o_proj.weight": "model-00006-of-00009.safetensors",
+ "model.layers.33.self_attn.q_proj.weight": "model-00006-of-00009.safetensors",
+ "model.layers.33.self_attn.v_proj.weight": "model-00006-of-00009.safetensors",
+ "model.layers.34.input_layernorm.weight": "model-00006-of-00009.safetensors",
+ "model.layers.34.mlp.down_proj.weight": "model-00006-of-00009.safetensors",
+ "model.layers.34.mlp.gate_proj.weight": "model-00006-of-00009.safetensors",
+ "model.layers.34.mlp.up_proj.weight": "model-00006-of-00009.safetensors",
+ "model.layers.34.post_attention_layernorm.weight": "model-00006-of-00009.safetensors",
+ "model.layers.34.self_attn.k_proj.weight": "model-00006-of-00009.safetensors",
+ "model.layers.34.self_attn.o_proj.weight": "model-00006-of-00009.safetensors",
+ "model.layers.34.self_attn.q_proj.weight": "model-00006-of-00009.safetensors",
+ "model.layers.34.self_attn.v_proj.weight": "model-00006-of-00009.safetensors",
+ "model.layers.35.input_layernorm.weight": "model-00006-of-00009.safetensors",
+ "model.layers.35.mlp.down_proj.weight": "model-00006-of-00009.safetensors",
+ "model.layers.35.mlp.gate_proj.weight": "model-00006-of-00009.safetensors",
+ "model.layers.35.mlp.up_proj.weight": "model-00006-of-00009.safetensors",
+ "model.layers.35.post_attention_layernorm.weight": "model-00006-of-00009.safetensors",
+ "model.layers.35.self_attn.k_proj.weight": "model-00006-of-00009.safetensors",
+ "model.layers.35.self_attn.o_proj.weight": "model-00006-of-00009.safetensors",
+ "model.layers.35.self_attn.q_proj.weight": "model-00006-of-00009.safetensors",
+ "model.layers.35.self_attn.v_proj.weight": "model-00006-of-00009.safetensors",
278
+ "model.layers.36.input_layernorm.weight": "model-00006-of-00009.safetensors",
279
+ "model.layers.36.mlp.down_proj.weight": "model-00006-of-00009.safetensors",
280
+ "model.layers.36.mlp.gate_proj.weight": "model-00006-of-00009.safetensors",
281
+ "model.layers.36.mlp.up_proj.weight": "model-00006-of-00009.safetensors",
282
+ "model.layers.36.post_attention_layernorm.weight": "model-00006-of-00009.safetensors",
283
+ "model.layers.36.self_attn.k_proj.weight": "model-00006-of-00009.safetensors",
284
+ "model.layers.36.self_attn.o_proj.weight": "model-00006-of-00009.safetensors",
285
+ "model.layers.36.self_attn.q_proj.weight": "model-00006-of-00009.safetensors",
286
+ "model.layers.36.self_attn.v_proj.weight": "model-00006-of-00009.safetensors",
287
+ "model.layers.37.input_layernorm.weight": "model-00007-of-00009.safetensors",
288
+ "model.layers.37.mlp.down_proj.weight": "model-00007-of-00009.safetensors",
289
+ "model.layers.37.mlp.gate_proj.weight": "model-00006-of-00009.safetensors",
290
+ "model.layers.37.mlp.up_proj.weight": "model-00007-of-00009.safetensors",
291
+ "model.layers.37.post_attention_layernorm.weight": "model-00007-of-00009.safetensors",
292
+ "model.layers.37.self_attn.k_proj.weight": "model-00006-of-00009.safetensors",
293
+ "model.layers.37.self_attn.o_proj.weight": "model-00006-of-00009.safetensors",
294
+ "model.layers.37.self_attn.q_proj.weight": "model-00006-of-00009.safetensors",
295
+ "model.layers.37.self_attn.v_proj.weight": "model-00006-of-00009.safetensors",
296
+ "model.layers.38.input_layernorm.weight": "model-00007-of-00009.safetensors",
297
+ "model.layers.38.mlp.down_proj.weight": "model-00007-of-00009.safetensors",
298
+ "model.layers.38.mlp.gate_proj.weight": "model-00007-of-00009.safetensors",
299
+ "model.layers.38.mlp.up_proj.weight": "model-00007-of-00009.safetensors",
300
+ "model.layers.38.post_attention_layernorm.weight": "model-00007-of-00009.safetensors",
301
+ "model.layers.38.self_attn.k_proj.weight": "model-00007-of-00009.safetensors",
302
+ "model.layers.38.self_attn.o_proj.weight": "model-00007-of-00009.safetensors",
303
+ "model.layers.38.self_attn.q_proj.weight": "model-00007-of-00009.safetensors",
304
+ "model.layers.38.self_attn.v_proj.weight": "model-00007-of-00009.safetensors",
305
+ "model.layers.39.input_layernorm.weight": "model-00007-of-00009.safetensors",
306
+ "model.layers.39.mlp.down_proj.weight": "model-00007-of-00009.safetensors",
307
+ "model.layers.39.mlp.gate_proj.weight": "model-00007-of-00009.safetensors",
308
+ "model.layers.39.mlp.up_proj.weight": "model-00007-of-00009.safetensors",
309
+ "model.layers.39.post_attention_layernorm.weight": "model-00007-of-00009.safetensors",
310
+ "model.layers.39.self_attn.k_proj.weight": "model-00007-of-00009.safetensors",
311
+ "model.layers.39.self_attn.o_proj.weight": "model-00007-of-00009.safetensors",
312
+ "model.layers.39.self_attn.q_proj.weight": "model-00007-of-00009.safetensors",
313
+ "model.layers.39.self_attn.v_proj.weight": "model-00007-of-00009.safetensors",
314
+ "model.layers.4.input_layernorm.weight": "model-00001-of-00009.safetensors",
315
+ "model.layers.4.mlp.down_proj.weight": "model-00001-of-00009.safetensors",
316
+ "model.layers.4.mlp.gate_proj.weight": "model-00001-of-00009.safetensors",
317
+ "model.layers.4.mlp.up_proj.weight": "model-00001-of-00009.safetensors",
318
+ "model.layers.4.post_attention_layernorm.weight": "model-00001-of-00009.safetensors",
319
+ "model.layers.4.self_attn.k_proj.weight": "model-00001-of-00009.safetensors",
320
+ "model.layers.4.self_attn.o_proj.weight": "model-00001-of-00009.safetensors",
321
+ "model.layers.4.self_attn.q_proj.weight": "model-00001-of-00009.safetensors",
322
+ "model.layers.4.self_attn.v_proj.weight": "model-00001-of-00009.safetensors",
323
+ "model.layers.40.input_layernorm.weight": "model-00007-of-00009.safetensors",
324
+ "model.layers.40.mlp.down_proj.weight": "model-00007-of-00009.safetensors",
325
+ "model.layers.40.mlp.gate_proj.weight": "model-00007-of-00009.safetensors",
326
+ "model.layers.40.mlp.up_proj.weight": "model-00007-of-00009.safetensors",
327
+ "model.layers.40.post_attention_layernorm.weight": "model-00007-of-00009.safetensors",
328
+ "model.layers.40.self_attn.k_proj.weight": "model-00007-of-00009.safetensors",
329
+ "model.layers.40.self_attn.o_proj.weight": "model-00007-of-00009.safetensors",
330
+ "model.layers.40.self_attn.q_proj.weight": "model-00007-of-00009.safetensors",
331
+ "model.layers.40.self_attn.v_proj.weight": "model-00007-of-00009.safetensors",
332
+ "model.layers.41.input_layernorm.weight": "model-00007-of-00009.safetensors",
333
+ "model.layers.41.mlp.down_proj.weight": "model-00007-of-00009.safetensors",
334
+ "model.layers.41.mlp.gate_proj.weight": "model-00007-of-00009.safetensors",
335
+ "model.layers.41.mlp.up_proj.weight": "model-00007-of-00009.safetensors",
336
+ "model.layers.41.post_attention_layernorm.weight": "model-00007-of-00009.safetensors",
337
+ "model.layers.41.self_attn.k_proj.weight": "model-00007-of-00009.safetensors",
338
+ "model.layers.41.self_attn.o_proj.weight": "model-00007-of-00009.safetensors",
339
+ "model.layers.41.self_attn.q_proj.weight": "model-00007-of-00009.safetensors",
340
+ "model.layers.41.self_attn.v_proj.weight": "model-00007-of-00009.safetensors",
341
+ "model.layers.42.input_layernorm.weight": "model-00007-of-00009.safetensors",
342
+ "model.layers.42.mlp.down_proj.weight": "model-00007-of-00009.safetensors",
343
+ "model.layers.42.mlp.gate_proj.weight": "model-00007-of-00009.safetensors",
344
+ "model.layers.42.mlp.up_proj.weight": "model-00007-of-00009.safetensors",
345
+ "model.layers.42.post_attention_layernorm.weight": "model-00007-of-00009.safetensors",
346
+ "model.layers.42.self_attn.k_proj.weight": "model-00007-of-00009.safetensors",
347
+ "model.layers.42.self_attn.o_proj.weight": "model-00007-of-00009.safetensors",
348
+ "model.layers.42.self_attn.q_proj.weight": "model-00007-of-00009.safetensors",
349
+ "model.layers.42.self_attn.v_proj.weight": "model-00007-of-00009.safetensors",
350
+ "model.layers.43.input_layernorm.weight": "model-00008-of-00009.safetensors",
351
+ "model.layers.43.mlp.down_proj.weight": "model-00008-of-00009.safetensors",
352
+ "model.layers.43.mlp.gate_proj.weight": "model-00007-of-00009.safetensors",
353
+ "model.layers.43.mlp.up_proj.weight": "model-00007-of-00009.safetensors",
354
+ "model.layers.43.post_attention_layernorm.weight": "model-00008-of-00009.safetensors",
355
+ "model.layers.43.self_attn.k_proj.weight": "model-00007-of-00009.safetensors",
356
+ "model.layers.43.self_attn.o_proj.weight": "model-00007-of-00009.safetensors",
357
+ "model.layers.43.self_attn.q_proj.weight": "model-00007-of-00009.safetensors",
358
+ "model.layers.43.self_attn.v_proj.weight": "model-00007-of-00009.safetensors",
359
+ "model.layers.44.input_layernorm.weight": "model-00008-of-00009.safetensors",
360
+ "model.layers.44.mlp.down_proj.weight": "model-00008-of-00009.safetensors",
361
+ "model.layers.44.mlp.gate_proj.weight": "model-00008-of-00009.safetensors",
362
+ "model.layers.44.mlp.up_proj.weight": "model-00008-of-00009.safetensors",
363
+ "model.layers.44.post_attention_layernorm.weight": "model-00008-of-00009.safetensors",
364
+ "model.layers.44.self_attn.k_proj.weight": "model-00008-of-00009.safetensors",
365
+ "model.layers.44.self_attn.o_proj.weight": "model-00008-of-00009.safetensors",
366
+ "model.layers.44.self_attn.q_proj.weight": "model-00008-of-00009.safetensors",
367
+ "model.layers.44.self_attn.v_proj.weight": "model-00008-of-00009.safetensors",
368
+ "model.layers.45.input_layernorm.weight": "model-00008-of-00009.safetensors",
369
+ "model.layers.45.mlp.down_proj.weight": "model-00008-of-00009.safetensors",
370
+ "model.layers.45.mlp.gate_proj.weight": "model-00008-of-00009.safetensors",
371
+ "model.layers.45.mlp.up_proj.weight": "model-00008-of-00009.safetensors",
372
+ "model.layers.45.post_attention_layernorm.weight": "model-00008-of-00009.safetensors",
373
+ "model.layers.45.self_attn.k_proj.weight": "model-00008-of-00009.safetensors",
374
+ "model.layers.45.self_attn.o_proj.weight": "model-00008-of-00009.safetensors",
375
+ "model.layers.45.self_attn.q_proj.weight": "model-00008-of-00009.safetensors",
376
+ "model.layers.45.self_attn.v_proj.weight": "model-00008-of-00009.safetensors",
377
+ "model.layers.46.input_layernorm.weight": "model-00008-of-00009.safetensors",
378
+ "model.layers.46.mlp.down_proj.weight": "model-00008-of-00009.safetensors",
379
+ "model.layers.46.mlp.gate_proj.weight": "model-00008-of-00009.safetensors",
380
+ "model.layers.46.mlp.up_proj.weight": "model-00008-of-00009.safetensors",
381
+ "model.layers.46.post_attention_layernorm.weight": "model-00008-of-00009.safetensors",
382
+ "model.layers.46.self_attn.k_proj.weight": "model-00008-of-00009.safetensors",
383
+ "model.layers.46.self_attn.o_proj.weight": "model-00008-of-00009.safetensors",
384
+ "model.layers.46.self_attn.q_proj.weight": "model-00008-of-00009.safetensors",
385
+ "model.layers.46.self_attn.v_proj.weight": "model-00008-of-00009.safetensors",
386
+ "model.layers.47.input_layernorm.weight": "model-00008-of-00009.safetensors",
387
+ "model.layers.47.mlp.down_proj.weight": "model-00008-of-00009.safetensors",
388
+ "model.layers.47.mlp.gate_proj.weight": "model-00008-of-00009.safetensors",
389
+ "model.layers.47.mlp.up_proj.weight": "model-00008-of-00009.safetensors",
390
+ "model.layers.47.post_attention_layernorm.weight": "model-00008-of-00009.safetensors",
391
+ "model.layers.47.self_attn.k_proj.weight": "model-00008-of-00009.safetensors",
392
+ "model.layers.47.self_attn.o_proj.weight": "model-00008-of-00009.safetensors",
393
+ "model.layers.47.self_attn.q_proj.weight": "model-00008-of-00009.safetensors",
394
+ "model.layers.47.self_attn.v_proj.weight": "model-00008-of-00009.safetensors",
395
+ "model.layers.48.input_layernorm.weight": "model-00008-of-00009.safetensors",
396
+ "model.layers.48.mlp.down_proj.weight": "model-00008-of-00009.safetensors",
397
+ "model.layers.48.mlp.gate_proj.weight": "model-00008-of-00009.safetensors",
398
+ "model.layers.48.mlp.up_proj.weight": "model-00008-of-00009.safetensors",
399
+ "model.layers.48.post_attention_layernorm.weight": "model-00008-of-00009.safetensors",
400
+ "model.layers.48.self_attn.k_proj.weight": "model-00008-of-00009.safetensors",
401
+ "model.layers.48.self_attn.o_proj.weight": "model-00008-of-00009.safetensors",
402
+ "model.layers.48.self_attn.q_proj.weight": "model-00008-of-00009.safetensors",
403
+ "model.layers.48.self_attn.v_proj.weight": "model-00008-of-00009.safetensors",
404
+ "model.layers.49.input_layernorm.weight": "model-00008-of-00009.safetensors",
405
+ "model.layers.49.mlp.down_proj.weight": "model-00008-of-00009.safetensors",
406
+ "model.layers.49.mlp.gate_proj.weight": "model-00008-of-00009.safetensors",
407
+ "model.layers.49.mlp.up_proj.weight": "model-00008-of-00009.safetensors",
408
+ "model.layers.49.post_attention_layernorm.weight": "model-00008-of-00009.safetensors",
409
+ "model.layers.49.self_attn.k_proj.weight": "model-00008-of-00009.safetensors",
410
+ "model.layers.49.self_attn.o_proj.weight": "model-00008-of-00009.safetensors",
411
+ "model.layers.49.self_attn.q_proj.weight": "model-00008-of-00009.safetensors",
412
+ "model.layers.49.self_attn.v_proj.weight": "model-00008-of-00009.safetensors",
413
+ "model.layers.5.input_layernorm.weight": "model-00002-of-00009.safetensors",
414
+ "model.layers.5.mlp.down_proj.weight": "model-00002-of-00009.safetensors",
415
+ "model.layers.5.mlp.gate_proj.weight": "model-00001-of-00009.safetensors",
416
+ "model.layers.5.mlp.up_proj.weight": "model-00001-of-00009.safetensors",
417
+ "model.layers.5.post_attention_layernorm.weight": "model-00002-of-00009.safetensors",
418
+ "model.layers.5.self_attn.k_proj.weight": "model-00001-of-00009.safetensors",
419
+ "model.layers.5.self_attn.o_proj.weight": "model-00001-of-00009.safetensors",
420
+ "model.layers.5.self_attn.q_proj.weight": "model-00001-of-00009.safetensors",
421
+ "model.layers.5.self_attn.v_proj.weight": "model-00001-of-00009.safetensors",
422
+ "model.layers.50.input_layernorm.weight": "model-00009-of-00009.safetensors",
423
+ "model.layers.50.mlp.down_proj.weight": "model-00009-of-00009.safetensors",
424
+ "model.layers.50.mlp.gate_proj.weight": "model-00009-of-00009.safetensors",
425
+ "model.layers.50.mlp.up_proj.weight": "model-00009-of-00009.safetensors",
426
+ "model.layers.50.post_attention_layernorm.weight": "model-00009-of-00009.safetensors",
427
+ "model.layers.50.self_attn.k_proj.weight": "model-00008-of-00009.safetensors",
428
+ "model.layers.50.self_attn.o_proj.weight": "model-00009-of-00009.safetensors",
429
+ "model.layers.50.self_attn.q_proj.weight": "model-00008-of-00009.safetensors",
430
+ "model.layers.50.self_attn.v_proj.weight": "model-00008-of-00009.safetensors",
431
+ "model.layers.51.input_layernorm.weight": "model-00009-of-00009.safetensors",
432
+ "model.layers.51.mlp.down_proj.weight": "model-00009-of-00009.safetensors",
433
+ "model.layers.51.mlp.gate_proj.weight": "model-00009-of-00009.safetensors",
434
+ "model.layers.51.mlp.up_proj.weight": "model-00009-of-00009.safetensors",
435
+ "model.layers.51.post_attention_layernorm.weight": "model-00009-of-00009.safetensors",
436
+ "model.layers.51.self_attn.k_proj.weight": "model-00009-of-00009.safetensors",
437
+ "model.layers.51.self_attn.o_proj.weight": "model-00009-of-00009.safetensors",
438
+ "model.layers.51.self_attn.q_proj.weight": "model-00009-of-00009.safetensors",
439
+ "model.layers.51.self_attn.v_proj.weight": "model-00009-of-00009.safetensors",
440
+ "model.layers.52.input_layernorm.weight": "model-00009-of-00009.safetensors",
441
+ "model.layers.52.mlp.down_proj.weight": "model-00009-of-00009.safetensors",
442
+ "model.layers.52.mlp.gate_proj.weight": "model-00009-of-00009.safetensors",
443
+ "model.layers.52.mlp.up_proj.weight": "model-00009-of-00009.safetensors",
444
+ "model.layers.52.post_attention_layernorm.weight": "model-00009-of-00009.safetensors",
445
+ "model.layers.52.self_attn.k_proj.weight": "model-00009-of-00009.safetensors",
446
+ "model.layers.52.self_attn.o_proj.weight": "model-00009-of-00009.safetensors",
447
+ "model.layers.52.self_attn.q_proj.weight": "model-00009-of-00009.safetensors",
448
+ "model.layers.52.self_attn.v_proj.weight": "model-00009-of-00009.safetensors",
449
+ "model.layers.53.input_layernorm.weight": "model-00009-of-00009.safetensors",
450
+ "model.layers.53.mlp.down_proj.weight": "model-00009-of-00009.safetensors",
451
+ "model.layers.53.mlp.gate_proj.weight": "model-00009-of-00009.safetensors",
452
+ "model.layers.53.mlp.up_proj.weight": "model-00009-of-00009.safetensors",
453
+ "model.layers.53.post_attention_layernorm.weight": "model-00009-of-00009.safetensors",
454
+ "model.layers.53.self_attn.k_proj.weight": "model-00009-of-00009.safetensors",
455
+ "model.layers.53.self_attn.o_proj.weight": "model-00009-of-00009.safetensors",
456
+ "model.layers.53.self_attn.q_proj.weight": "model-00009-of-00009.safetensors",
457
+ "model.layers.53.self_attn.v_proj.weight": "model-00009-of-00009.safetensors",
458
+ "model.layers.54.input_layernorm.weight": "model-00009-of-00009.safetensors",
459
+ "model.layers.54.mlp.down_proj.weight": "model-00009-of-00009.safetensors",
460
+ "model.layers.54.mlp.gate_proj.weight": "model-00009-of-00009.safetensors",
461
+ "model.layers.54.mlp.up_proj.weight": "model-00009-of-00009.safetensors",
462
+ "model.layers.54.post_attention_layernorm.weight": "model-00009-of-00009.safetensors",
463
+ "model.layers.54.self_attn.k_proj.weight": "model-00009-of-00009.safetensors",
464
+ "model.layers.54.self_attn.o_proj.weight": "model-00009-of-00009.safetensors",
465
+ "model.layers.54.self_attn.q_proj.weight": "model-00009-of-00009.safetensors",
466
+ "model.layers.54.self_attn.v_proj.weight": "model-00009-of-00009.safetensors",
467
+ "model.layers.55.input_layernorm.weight": "model-00009-of-00009.safetensors",
468
+ "model.layers.55.mlp.down_proj.weight": "model-00009-of-00009.safetensors",
469
+ "model.layers.55.mlp.gate_proj.weight": "model-00009-of-00009.safetensors",
470
+ "model.layers.55.mlp.up_proj.weight": "model-00009-of-00009.safetensors",
471
+ "model.layers.55.post_attention_layernorm.weight": "model-00009-of-00009.safetensors",
472
+ "model.layers.55.self_attn.k_proj.weight": "model-00009-of-00009.safetensors",
473
+ "model.layers.55.self_attn.o_proj.weight": "model-00009-of-00009.safetensors",
474
+ "model.layers.55.self_attn.q_proj.weight": "model-00009-of-00009.safetensors",
475
+ "model.layers.55.self_attn.v_proj.weight": "model-00009-of-00009.safetensors",
476
+ "model.layers.6.input_layernorm.weight": "model-00002-of-00009.safetensors",
477
+ "model.layers.6.mlp.down_proj.weight": "model-00002-of-00009.safetensors",
478
+ "model.layers.6.mlp.gate_proj.weight": "model-00002-of-00009.safetensors",
479
+ "model.layers.6.mlp.up_proj.weight": "model-00002-of-00009.safetensors",
480
+ "model.layers.6.post_attention_layernorm.weight": "model-00002-of-00009.safetensors",
481
+ "model.layers.6.self_attn.k_proj.weight": "model-00002-of-00009.safetensors",
482
+ "model.layers.6.self_attn.o_proj.weight": "model-00002-of-00009.safetensors",
483
+ "model.layers.6.self_attn.q_proj.weight": "model-00002-of-00009.safetensors",
484
+ "model.layers.6.self_attn.v_proj.weight": "model-00002-of-00009.safetensors",
485
+ "model.layers.7.input_layernorm.weight": "model-00002-of-00009.safetensors",
486
+ "model.layers.7.mlp.down_proj.weight": "model-00002-of-00009.safetensors",
487
+ "model.layers.7.mlp.gate_proj.weight": "model-00002-of-00009.safetensors",
488
+ "model.layers.7.mlp.up_proj.weight": "model-00002-of-00009.safetensors",
489
+ "model.layers.7.post_attention_layernorm.weight": "model-00002-of-00009.safetensors",
490
+ "model.layers.7.self_attn.k_proj.weight": "model-00002-of-00009.safetensors",
491
+ "model.layers.7.self_attn.o_proj.weight": "model-00002-of-00009.safetensors",
492
+ "model.layers.7.self_attn.q_proj.weight": "model-00002-of-00009.safetensors",
493
+ "model.layers.7.self_attn.v_proj.weight": "model-00002-of-00009.safetensors",
494
+ "model.layers.8.input_layernorm.weight": "model-00002-of-00009.safetensors",
495
+ "model.layers.8.mlp.down_proj.weight": "model-00002-of-00009.safetensors",
496
+ "model.layers.8.mlp.gate_proj.weight": "model-00002-of-00009.safetensors",
497
+ "model.layers.8.mlp.up_proj.weight": "model-00002-of-00009.safetensors",
498
+ "model.layers.8.post_attention_layernorm.weight": "model-00002-of-00009.safetensors",
499
+ "model.layers.8.self_attn.k_proj.weight": "model-00002-of-00009.safetensors",
500
+ "model.layers.8.self_attn.o_proj.weight": "model-00002-of-00009.safetensors",
501
+ "model.layers.8.self_attn.q_proj.weight": "model-00002-of-00009.safetensors",
502
+ "model.layers.8.self_attn.v_proj.weight": "model-00002-of-00009.safetensors",
503
+ "model.layers.9.input_layernorm.weight": "model-00002-of-00009.safetensors",
504
+ "model.layers.9.mlp.down_proj.weight": "model-00002-of-00009.safetensors",
505
+ "model.layers.9.mlp.gate_proj.weight": "model-00002-of-00009.safetensors",
506
+ "model.layers.9.mlp.up_proj.weight": "model-00002-of-00009.safetensors",
507
+ "model.layers.9.post_attention_layernorm.weight": "model-00002-of-00009.safetensors",
508
+ "model.layers.9.self_attn.k_proj.weight": "model-00002-of-00009.safetensors",
509
+ "model.layers.9.self_attn.o_proj.weight": "model-00002-of-00009.safetensors",
510
+ "model.layers.9.self_attn.q_proj.weight": "model-00002-of-00009.safetensors",
511
+ "model.layers.9.self_attn.v_proj.weight": "model-00002-of-00009.safetensors",
512
+ "model.norm.weight": "model-00009-of-00009.safetensors"
513
+ }
514
+ }
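The `weight_map` above is the tail of `model.safetensors.index.json`: each key is a tensor name and each value is the shard file that stores it. A minimal sketch of how a loader resolves a tensor to its shard (the `index` literal here is an illustrative two-entry subset, and `shard_for` is a hypothetical helper, not part of any library):

```python
import json

# Illustrative subset of the index's "weight_map"; the real file maps
# every tensor in the model to one of nine shard files.
index = json.loads("""{
  "weight_map": {
    "model.layers.55.self_attn.v_proj.weight": "model-00009-of-00009.safetensors",
    "model.norm.weight": "model-00009-of-00009.safetensors"
  }
}""")

def shard_for(tensor_name: str) -> str:
    """Return the shard filename that stores the given tensor."""
    return index["weight_map"][tensor_name]

print(shard_for("model.norm.weight"))  # model-00009-of-00009.safetensors
```

A loader only needs to open the shards actually referenced by the tensors it wants, which is why per-tensor mapping beats a flat list for partial loads.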
output-00001-of-00002.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7e702515999cbbeae84e488f6975cfe037b8c7ff49941802b6c722ee5cef4d1a
+ size 8561962832
output-00002-of-00002.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:843f7c2cccac18d719f1bd358cbf44ae405aca75d6e37a90d7b47d339b6b3fb8
+ size 2909099072
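Both `output-*.safetensors` entries are Git LFS pointer files: three lines giving the spec version, the object's sha256 oid, and its size in bytes. A small stdlib-only sketch of parsing such a pointer (`parse_lfs_pointer` is a hypothetical helper; the pointer text is the one committed for `output-00002-of-00002.safetensors`):

```python
# Parse a Git LFS pointer file into its fields. Each line is
# "<key> <value>"; the oid value is "<algo>:<hex digest>".
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:843f7c2cccac18d719f1bd358cbf44ae405aca75d6e37a90d7b47d339b6b3fb8
size 2909099072
"""

def parse_lfs_pointer(text: str) -> dict:
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    return {"version": fields["version"], "algo": algo,
            "digest": digest, "size": int(fields["size"])}

info = parse_lfs_pointer(pointer)
print(info["size"])  # 2909099072
```

After downloading the real object, hashing it with sha256 and comparing against `digest` (and the byte count against `size`) verifies the transfer.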
special_tokens_map.json ADDED
@@ -0,0 +1,23 @@
+ {
+ "bos_token": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dadfd56d766715c61d2ef780a525ab43b8e6da4de6865bda3d95fdef5e134055
+ size 493443
tokenizer_config.json ADDED
@@ -0,0 +1,42 @@
+ {
+ "add_bos_token": true,
+ "add_eos_token": false,
+ "added_tokens_decoder": {
+ "0": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "1": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "2": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "additional_special_tokens": [],
+ "bos_token": "<s>",
+ "clean_up_tokenization_spaces": false,
+ "eos_token": "</s>",
+ "legacy": true,
+ "model_max_length": 1000000000000000019884624838656,
+ "pad_token": null,
+ "sp_model_kwargs": {},
+ "spaces_between_special_tokens": false,
+ "tokenizer_class": "LlamaTokenizer",
+ "unk_token": "<unk>",
+ "use_default_system_prompt": false
+ }
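The `add_bos_token: true` / `add_eos_token: false` pair in `tokenizer_config.json` means encoded sequences get `<s>` (id 1) prepended but no trailing `</s>`. A minimal illustration of that effect (the `wrap` helper is hypothetical; the real logic lives in transformers' `LlamaTokenizer`):

```python
import json

# Relevant subset of tokenizer_config.json; ids 1 and 2 match the
# added_tokens_decoder entries for <s> and </s> above.
config = json.loads("""{
  "add_bos_token": true,
  "add_eos_token": false,
  "bos_token": "<s>",
  "eos_token": "</s>"
}""")

def wrap(token_ids, bos_id=1, eos_id=2):
    """Apply the config's BOS/EOS policy to a list of token ids."""
    ids = list(token_ids)
    if config["add_bos_token"]:
        ids = [bos_id] + ids
    if config["add_eos_token"]:
        ids = ids + [eos_id]
    return ids

print(wrap([10, 20, 30]))  # [1, 10, 20, 30]
```

With this config, the EOS id only appears when the model generates it, which is the usual setup for causal-LM inference.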