RichardErkhov committed · Commit 0e29131 · verified · 1 Parent(s): 3e28b9f

uploaded readme

Files changed (1): README.md (+201 -0)
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


TowerBase-7B-v0.1 - bnb 8bits
- Model creator: https://huggingface.co/Unbabel/
- Original model: https://huggingface.co/Unbabel/TowerBase-7B-v0.1/

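Since this repository hosts a bitsandbytes 8-bit quantization, a minimal loading sketch may be useful. The snippet below is illustrative, assuming `transformers`, `accelerate`, and `bitsandbytes` are installed; it uses the original checkpoint id and quantizes to 8-bit on the fly at load time (loading pre-quantized weights from a repo like this one works the same way, with that repo id substituted).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 8-bit bitsandbytes quantization: weights are stored in int8 and
# matmuls are dispatched through the LLM.int8() kernels.
bnb_config = BitsAndBytesConfig(load_in_8bit=True)

model_id = "Unbabel/TowerBase-7B-v0.1"  # original checkpoint, quantized at load time
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # requires accelerate; places layers on available devices
)
```
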
Original model description:
---
license: cc-by-nc-4.0
language:
- en
- de
- fr
- zh
- pt
- nl
- ru
- ko
- it
- es
metrics:
- comet
pipeline_tag: translation
model-index:
- name: TowerBase-7B-v0.1
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 51.02
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Unbabel/TowerBase-7B-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 77.68
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Unbabel/TowerBase-7B-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 43.48
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Unbabel/TowerBase-7B-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 37.29
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Unbabel/TowerBase-7B-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 72.06
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Unbabel/TowerBase-7B-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 13.12
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Unbabel/TowerBase-7B-v0.1
      name: Open LLM Leaderboard
---
# Model Card for TowerBase-7B-v0.1

## Model Details

### Model Description

TowerBase-7B is a language model that results from continuing the pretraining of Llama 2 on a mix of 20 billion tokens of monolingual data in ten different languages — English, Portuguese, Spanish, French, German, Dutch, Italian, Korean, Chinese, Russian — and bilingual data. TowerBase-7B-v0.1 is the first model in the series.
The resulting model shows improved performance on the supported languages, while maintaining Llama 2's capabilities on English. It is particularly well-suited for fine-tuning on translation and related tasks: check out [TowerInstruct](https://huggingface.co/Unbabel/TowerInstruct-7B-v0.1).

We will release more details in the upcoming technical report.

- **Developed by:** Unbabel, Instituto Superior Técnico, CentraleSupélec University of Paris-Saclay
- **Model type:** A 7B parameter model built on top of Llama 2 by continuing pretraining on multilingual data.
- **Language(s) (NLP):** English, Portuguese, Spanish, French, German, Dutch, Italian, Korean, Chinese, Russian
- **License:** CC-BY-NC-4.0, Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.
154
+ ## Intended uses & limitations
155
+
156
+ The model is intended for research purposes in the 10 languages it supports.
157
+ The model is able to perform well on translation and related tasks (e.g., APE, GEC) on a few-shot regime.
158
+ It can also be fine-tuned to perform these tasks in a zero-shot fashion (see [TowerInstruct](https://huggingface.co/Unbabel/TowerInstruct-7B-v0.1), as well as other multilingual tasks.
159
+
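As an illustration of the few-shot regime, a translation prompt for a base (non-instruction-tuned) model can simply stack example pairs and leave the last slot open for the model to complete. The pairs below are made up for demonstration:

```python
# Hypothetical few-shot prompt: in-context translation pairs followed by
# an open slot that the base model completes.
few_shot_prompt = (
    "English: The weather is nice today.\nPortuguese: O tempo está bom hoje.\n"
    "English: Where is the train station?\nPortuguese: Onde fica a estação de trem?\n"
    "English: I would like a cup of coffee.\nPortuguese:"
)
```
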
### Out-of-Scope Use

The model is not guaranteed to perform well for languages other than the 10 languages it supports.

## Bias, Risks, and Limitations

TowerBase-v0.1 has not been aligned to human preferences, so the model may generate problematic outputs (e.g., hallucinations, harmful content, or false statements).

## Run the model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Unbabel/TowerBase-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

model = AutoModelForCausalLM.from_pretrained(model_id)

# Prompt the base model with a translation-style completion prefix.
text = "English: My name is TowerBase.\nPortuguese:"
inputs = tokenizer(text, return_tensors="pt")

# Greedy decoding by default; cap the continuation at 20 new tokens.
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
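
Note that for decoder-only models, `generate` returns the prompt tokens followed by the continuation, so the decoded string above reprints the input before the Portuguese translation. A small follow-up sketch to keep only the newly generated text:

```python
# Drop the prompt tokens; decode only what the model generated.
new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```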

### Training Data

Filtered versions of [mc4](https://huggingface.co/datasets/mc4) and bilingual data from various sources (e.g., [OPUS](https://opus.nlpl.eu/)).

## Citation

```bibtex
@misc{tower_llm_2024,
      title={Tower: An Open Multilingual Large Language Model for Translation-Related Tasks},
      author={Duarte M. Alves and José Pombal and Nuno M. Guerreiro and Pedro H. Martins and João Alves and Amin Farajian and Ben Peters and Ricardo Rei and Patrick Fernandes and Sweta Agrawal and Pierre Colombo and José G. C. de Souza and André F. T. Martins},
      year={2024},
      eprint={2402.17733},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```