omkarthawakar committed: Update README.md

# MobiLlama-1B-Chat

<center><img src="MobileLLaMa.png" alt="mobillama logo" width="300"/></center>

We present MobiLlama-1.2B-Chat, an instruction-following model finetuned on [MBZUAI/MobiLlama-1B](https://huggingface.co/MBZUAI/MobiLlama-1B).

## Model Summary

"Bigger the better" has been the predominant trend in recent Large Language Model (LLM) development. However, LLMs are not well suited to scenarios that require on-device processing, energy efficiency, a low memory footprint, and low-latency responses. These requirements are crucial for privacy, security, and sustainable deployment. This paper explores the ‘less is more’ paradigm by addressing the challenge of designing accurate yet efficient Small Language Models (SLMs) for resource-constrained devices. Our primary contribution is an accurate and fully transparent open-source 0.5 billion (0.5B) parameter SLM, named MobiLlama, catering to the specific needs of resource-constrained computing with an emphasis on enhanced performance at reduced resource demands. MobiLlama is an SLM design that starts from a larger model and applies a careful parameter-sharing scheme to reduce both the pre-training and the deployment cost. Our work not only strives to bridge the gap in open-source SLMs but also ensures full transparency: the complete training data pipeline, training code, model weights, and over 300 checkpoints, along with evaluation code, are available on our [GitHub](https://github.com/mbzuai-oryx/MobiLlama).

[Arxiv Paper Link]('')

## Model Description

- **Model type:** Small Language Model (SLM) built using the architecture design of LLaMA-7B
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Resources for more information:**
  - [Training Code](https://github.com/mbzuai-oryx/MobiLlama)
  - [Data Preparation](https://github.com/LLM360/amber-data-prep)
  - [Fully processed Amber pretraining data](https://huggingface.co/datasets/LLM360/AmberDatasets)

# Loading MobiLlama-1B-Chat
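
A minimal sketch using the standard Hugging Face `transformers` auto classes; the prompt text and generation settings below are illustrative assumptions, and `trust_remote_code=True` is assumed to be needed for the custom MobiLlama architecture:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "MBZUAI/MobiLlama-1B-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Illustrative prompt; the chat template actually used for finetuning may differ.
prompt = "What are the key benefits of small language models?"
inputs = tokenizer(prompt, return_tensors="pt")

# Sampling settings are illustrative, not prescriptive.
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```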

Alternatively, you may use [FastChat](https://github.com/lm-sys/FastChat):

```
python3 -m fastchat.serve.cli --model-path MBZUAI/MobiLlama-1B-Chat
```

# MobiLlama-1B-Chat Finetuning Details

## DataMix

| Subset | Number of rows | License |
| ----------- | ----------- | ----------- |
| WizardLM/WizardLM_evol_instruct_V2_196k | 143k | |
| icybee/share_gpt_90k_v1 | 90k | cc0-1.0 |
| Total | 233k | |
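
Both subsets are hosted on the Hugging Face Hub; the sketch below pulls them with the `datasets` library. The split name is an assumption, and any mixing or formatting step is omitted since the exact finetuning preprocessing is not shown in this README:

```python
from datasets import load_dataset

# Load the two instruction-tuning subsets listed in the DataMix table.
wizardlm = load_dataset("WizardLM/WizardLM_evol_instruct_V2_196k", split="train")
sharegpt = load_dataset("icybee/share_gpt_90k_v1", split="train")

print(len(wizardlm), len(sharegpt))  # expected roughly 143k and 90k rows
```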

## Hyperparameters

| Hyperparameter | Value |
| ----------- | ----------- |

…

| Winogrande | 0.5659 | 0.5966 |

## Citation

**BibTeX:**

```bibtex
coming soon
```