---
language:
- en
- ja
library_name: transformers
pipeline_tag: text-generation
model_type: mistral
license: apache-2.0
---

# Swallow-MS-7b-v0.1

Our Swallow-MS-7b-v0.1 model has undergone continual pre-training from Mistral-7B-v0.1, primarily with the addition of Japanese language data. **The instruction-tuned version will be released soon.**

![logo](./logo.png)

## Model Details

* **Model type**: Please refer to the Mistral technical report for details on the model architecture.
* **Language(s)**: Japanese, English
* **Tokenizer**: This model employs a tokenizer whose vocabulary was expanded with Japanese data. Text is represented with fewer tokens, which makes inference notably faster; the sketch below illustrates the effect.
* **Contact**: swallow[at]nlp.c.titech.ac.jp
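
To see the tokenizer efficiency for yourself, you can compare token counts against the base Mistral tokenizer. This is a minimal sketch, not part of the original card; it assumes both repositories are reachable from the Hugging Face Hub, and the example sentence is our own:

```python
# Minimal sketch: compare how many tokens each tokenizer needs for the same
# Japanese sentence. Fewer tokens means cheaper and faster generation.
from transformers import AutoTokenizer

text = "東京工業大学の主なキャンパスは大岡山にあります。"  # illustrative sentence

base = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
swallow = AutoTokenizer.from_pretrained("tokyotech-llm/Swallow-MS-7b-v0.1")

print("Mistral-7B-v0.1  :", len(base.tokenize(text)))     # Japanese falls back to many small pieces
print("Swallow-MS-7b-v0.1:", len(swallow.tokenize(text)))  # expanded vocabulary needs fewer tokens
```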

## Base Model Performance

### Japanese version
|Model|Size|JCommonsenseQA|JEMHopQA|NIILC|JSQuAD|XL-Sum|MGSM|WMT20-en-ja|WMT20-ja-en|Average|
|---|---|---|---|---|---|---|---|---|---|---|
| | |4-shot|4-shot|4-shot|4-shot|1-shot|4-shot|4-shot|4-shot||
| CyberAgentLM2-7B |7B| 0.2198 | 0.5047 | 0.5066 | 0.7799 | 0.0233 | 0.0600 | 0.2345 | 0.1499 | 0.3098 |
| Llama 2 |7B| 0.3852 | 0.4240 | 0.3410 | 0.7917 | 0.1905 | 0.0760 | 0.1783 | 0.1738 | 0.3201 |
| japanese-stablelm-base-beta-7b|7B| 0.3610 | 0.4478 | 0.4432 | 0.8318 | 0.2195 | 0.0720 | 0.1946 | 0.1226 | 0.3366 |
| japanese-stablelm-base-ja_vocab-beta-7b|7B| 0.2172 | 0.4482 | 0.4309 | 0.8202 | 0.0757 | 0.0520 | 0.1601 | 0.1453 | 0.2937 |
| ELYZA-japanese-Llama-2-7b|7B| 0.5791 | 0.4703 | 0.4019 | 0.8226 | 0.1312 | 0.0600 | 0.1795 | 0.1289 | 0.3467 |
| ELYZA-japanese-Llama-2-7b-fast|7B| 0.5308 | 0.4330 | 0.3898 | 0.8131 | 0.1289 | 0.0720 | 0.1678 | 0.1143 | 0.3312 |
| youri-7b (base) |7B| 0.4620 | 0.4776 | 0.4999 | 0.8506 | 0.1957 | 0.0640 | 0.2671 | **0.1971** | 0.3768 |
| Swallow-7b |7B| 0.4808 | 0.5078 | 0.5968 | 0.8573 | 0.1830 | 0.1240 | 0.2510 | 0.1511 | 0.3940 |
| Swallow-7b-plus |7B| 0.5478 | **0.5493** | **0.6030** | 0.8544 | 0.1806 | 0.1360 | 0.2568 | 0.1441 | 0.4090 |
| Qwen-7B |7B| 0.7712 | 0.4234 | 0.2376 | 0.8594 | 0.1371 | 0.2160 | 0.1689 | 0.1801 | 0.3742 |
| nekomata-7b |7B| 0.7417 | 0.4928 | 0.5022 | 0.8707 | 0.1676 | 0.1240 | **0.2673** | 0.1815 | 0.4185 |
| Mistral-7B-v0.1 |7B| 0.7301 | 0.4245 | 0.2722 | 0.8563 | 0.2006 | 0.1760 | 0.1405 | 0.1733 | 0.3717 |
| japanese-stablelm-base-gamma-7b|7B| 0.7364 | 0.4643 | 0.5568 | **0.8910** | **0.2293** | 0.1680 | 0.2390 | 0.1561 | 0.4301 |
| Swallow-MS-7b-v0.1 |7B| **0.8570** | 0.4915 | 0.5519 | 0.8802 | 0.1988 | **0.2240** | 0.2494 | 0.1667 | **0.4524** |

### English version

|Model|Size|OpenBookQA|TriviaQA|HellaSwag|SQuAD2.0|XWINO|GSM8K|Average|
|---|---|---|---|---|---|---|---|---|
| | |8-shot|8-shot|8-shot|8-shot|8-shot|8-shot||
| CyberAgentLM2-7B |7B| 0.2860 | 0.3496 | 0.5003 | 0.3510 | 0.8581 | 0.0705 | 0.4026 |
| Llama 2 |7B| 0.3580 | 0.6265 | 0.5860 | 0.3207 | 0.9049 | 0.1410 | 0.4895 |
| japanese-stablelm-base-beta-7b|7B| 0.3620 | 0.5903 | 0.5707 | 0.2992 | 0.8994 | 0.1198 | 0.4736 |
| japanese-stablelm-base-ja_vocab-beta-7b|7B| 0.3520 | 0.5549 | 0.5644 | 0.3079 | 0.8942 | 0.0538 | 0.4545 |
| ELYZA-japanese-Llama-2-7b|7B| 0.3400 | 0.5875 | 0.5595 | 0.2721 | 0.8989 | 0.1638 | 0.4703 |
| ELYZA-japanese-Llama-2-7b-fast|7B| 0.3280 | 0.5817 | 0.5530 | 0.2605 | 0.8989 | 0.1425 | 0.4608 |
| youri-7b (base) |7B| 0.3400 | 0.5257 | 0.5540 | 0.3297 | 0.8938 | 0.0963 | 0.4566 |
| Swallow-7b |7B| 0.3180 | 0.4836 | 0.5308 | 0.3125 | 0.8817 | 0.1130 | 0.4399 |
| Swallow-7b-plus |7B| 0.3280 | 0.4558 | 0.5259 | 0.3134 | 0.8929 | 0.1061 | 0.4370 |
| Qwen-7B |7B| 0.3640 | 0.5695 | 0.5787 | **0.3799** | 0.8933 | **0.4617** | 0.5412 |
| nekomata-7b |7B| 0.3340 | 0.4371 | 0.5340 | 0.2933 | 0.8766 | 0.1531 | 0.4380 |
| Mistral-7B-v0.1 |7B| **0.3660** | **0.7050** | **0.6264** | **0.3799** | **0.9157** | 0.3533 | **0.5577** |
| japanese-stablelm-base-gamma-7b|7B| 0.3240 | 0.5745 | 0.5739 | 0.3546 | 0.8976 | 0.1911 | 0.4860 |
| Swallow-MS-7b-v0.1 |7B| 0.3440 | 0.5976 | 0.5810 | 0.3364 | 0.9037 | 0.2623 | 0.5042 |

### Code version
|Model|Size|JHumanEval|HumanEval|
|---|---|---|---|
| | |pass@1|pass@1|
| CyberAgentLM2-7B |7B| | |
| Llama 2 |7B| | |
| japanese-stablelm-base-beta-7b|7B| | |
| japanese-stablelm-base-ja_vocab-beta-7b|7B| | |
| ELYZA-japanese-Llama-2-7b|7B| | |
| ELYZA-japanese-Llama-2-7b-fast|7B| | |
| youri-7b (base) |7B| | |
| Swallow-7b |7B| | |
| Swallow-7b-plus |7B| | |
| Qwen-7B |7B| | |
| nekomata-7b |7B| | |
| Mistral-7B-v0.1 |7B| | |
| japanese-stablelm-base-gamma-7b|7B| | |
| Swallow-MS-7b-v0.1 |7B| | |
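
The code tables report pass@1, the probability that a single generated sample solves a problem. For reference, here is a minimal sketch of the standard unbiased pass@k estimator from the HumanEval paper (Chen et al., 2021); it is illustrative and not the evaluation code behind this card:

```python
# Unbiased pass@k estimator: n samples per problem, c of them correct.
# pass@k = 1 - C(n-c, k) / C(n, k); for k = 1 this reduces to c / n.
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    if n - c < k:
        return 1.0  # every size-k draw must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(n=20, c=5, k=1))  # 0.25, the fraction of correct samples
```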

## Usage

First install the additional dependencies listed in [requirements.txt](./requirements.txt):

```sh
pip install -r requirements.txt
```

### Use the base model

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "tokyotech-llm/Swallow-MS-7b-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Load the weights in bfloat16 and let `device_map="auto"` place them on the
# available device(s); these kwargs belong to the model, not the tokenizer.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "東京工業大学の主なキャンパスは、"  # "The main campuses of Tokyo Institute of Technology are..."
input_ids = tokenizer.encode(
    prompt,
    add_special_tokens=False,
    return_tensors="pt"
)
tokens = model.generate(
    input_ids.to(device=model.device),
    max_new_tokens=128,
    temperature=0.99,
    top_p=0.95,
    do_sample=True,
)

out = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(out)
```
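
Note that `do_sample=True` with `temperature=0.99` and `top_p=0.95` yields different completions on each run; for reproducible output, omit the sampling arguments and pass `do_sample=False` to decode greedily.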

## Training Datasets

### Continual Pre-Training
The following datasets were used for continual pre-training.

- [Algebraic Stack](https://huggingface.co/datasets/EleutherAI/proof-pile-2)
- [Japanese Wikipedia](https://dumps.wikimedia.org/other/cirrussearch)
- [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)
- [Swallow Corpus](https://chokkan.org/temp/tokyotech-llm/swallow-corpus)
- [The Pile](https://huggingface.co/datasets/EleutherAI/pile)

## Risks and Limitations

The models released here are still in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.

## Acknowledgements

We thank Mistral AI for releasing Mistral 7B v0.1 under an open license for others to build on.

Our project is supported by the [ABCI Large-scale Language Model Building Support Program](https://abci.ai/en/link/llm_support_program.html) of the National Institute of Advanced Industrial Science and Technology.

## License

apache-2.0

## Authors

Here are the team members:
- From [Okazaki Laboratory](https://www.nlp.c.titech.ac.jp/index.en.html), the following members:
  - [Naoaki Okazaki](https://www.chokkan.org/index.ja.html)
  - [Sakae Mizuki](https://s-mizuki-nlp.github.io/)
  - [Hiroki Iida](https://meshidenn.github.io/)
  - [Mengsay Loem](https://loem-ms.github.io/)
  - [Shota Hirai](https://huggingface.co/Kotemo428)
  - [Kakeru Hattori](https://aya-se.vercel.app/)
  - [Masanari Ohi](https://twitter.com/stjohn2007)
- From [YOKOTA Laboratory](https://www.rio.gsic.titech.ac.jp/en/index.html), the following members:
  - [Rio Yokota](https://twitter.com/rioyokota)
  - [Kazuki Fujii](https://twitter.com/okoge_kaz)
  - [Taishi Nakamura](https://twitter.com/Setuna7777_2)