---
library_name: transformers
tags: []
---

# Jamba-Small v1

This is a pruned version of AI21 Labs' Jamba-v0.1 model, roughly 25% the size of the original.

## Model Details
Whereas Jamba-v0.1 contains 4 Jamba blocks, Jamba-Small contains only 1, for a total of 8 layers.
This block follows the same structure as those in Jamba-v0.1: a 1:7 ratio of attention to Mamba layers, with MoE applied every 2 layers.
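
To make that pattern concrete, here is a minimal sketch of an 8-layer configuration using the `JambaConfig` fields exposed by recent `transformers` releases. The values shown are illustrative defaults, not a guarantee of this checkpoint's exact `config.json`:

```python
# Illustrative only: the layer-type pattern of a single 8-layer Jamba block,
# expressed with transformers' JambaConfig (>= 4.40). See this repo's
# config.json for the values actually used by the checkpoint.
from transformers import JambaConfig

config = JambaConfig(
    num_hidden_layers=8,    # one Jamba block instead of Jamba-v0.1's four
    attn_layer_period=8,    # 1 attention layer per 8 layers -> 1:7 attention-to-Mamba
    attn_layer_offset=4,    # the attention layer sits mid-block
    expert_layer_period=2,  # MoE applied every 2 layers
    expert_layer_offset=1,
    num_experts=16,
    num_experts_per_tok=2,
)

# Print which role each layer plays (attention vs. Mamba, MoE vs. dense MLP).
for i in range(config.num_hidden_layers):
    mixer = "attention" if i % config.attn_layer_period == config.attn_layer_offset else "mamba"
    ffn = "moe" if i % config.expert_layer_period == config.expert_layer_offset else "mlp"
    print(f"layer {i}: {mixer:9} + {ffn}")
```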

Jamba-Small's weights are initialized from selected layers of the original Jamba-v0.1 model. For v1, the layers are mapped as follows (left: Jamba-Small layer index, right: Jamba-v0.1 layer index):
```
0: 0
1: 1
2: 2
3: 3
4: 4
5: 5
6: 30
7: 31
```
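
The pruning itself amounts to copying whole decoder layers from the source model. The sketch below shows the idea, assuming the layer layout of the `transformers` Jamba implementation (`model.model.layers`); it is not the actual script used to produce this checkpoint, and the output path is a placeholder:

```python
# Minimal sketch of the layer-copy initialization described above.
# Assumes transformers >= 4.40 with native Jamba support and enough CPU RAM
# to hold Jamba-v0.1 in bfloat16. Not the actual pruning script.
import torch
from transformers import AutoModelForCausalLM

source = AutoModelForCausalLM.from_pretrained(
    "ai21labs/Jamba-v0.1",
    torch_dtype=torch.bfloat16,
    use_mamba_kernels=False,  # skip the optional mamba-ssm / causal-conv1d kernels
)

# Jamba-Small layer index -> Jamba-v0.1 layer index (the v1 mapping above).
# Source and destination indices are congruent mod 8 and mod 2, so every
# copied layer keeps its attention/Mamba and MoE/dense role.
layer_map = {0: 0, 1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 30, 7: 31}

source.model.layers = torch.nn.ModuleList(
    [source.model.layers[src] for _, src in sorted(layer_map.items())]
)
source.config.num_hidden_layers = len(layer_map)

# Embeddings, final norm, and LM head are reused unchanged.
source.save_pretrained("./jamba-small-v1")
```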

Note that no additional fine-tuning has been performed on this model; as a result, its performance is exceptionally poor. It should not be used in production without further training.

### Model Description

- **Developed by:** Nathan Brown (OxxoCodes)
- **Compute provided by:** Clemson Palmetto Cluster
- **Model type:** Joint Attention and Mamba (Jamba)
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Original model:** [Jamba-v0.1](https://huggingface.co/ai21labs/Jamba-v0.1)
- **Jamba paper:** [https://arxiv.org/pdf/2403.19887.pdf](https://arxiv.org/pdf/2403.19887.pdf)