Update README.md
README.md (changed)
@@ -23,9 +23,6 @@ The model has hybrid architecture with Mamba and Attention heads running in parallel.
 This model is ready for commercial use.
 
 
-**[Caution] During generation, the batch size needs to be 1. Our current implementation does not fully support padding of Meta tokens + SWA; this is a work in progress. Training and pre-filling support any batch size.**
-
-
 **Model Developer:** NVIDIA
 
 **Model Dates:** Hymba-1.5B-Base was trained between September 1, 2024 and November 10th, 2024.
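The caution removed in this commit described a runtime constraint: generation had to run with batch size 1, while training and pre-filling supported any batch size. A minimal sketch of how such a constraint might be guarded in calling code is shown below; the helper name `check_generation_batch` is hypothetical and not part of any Hymba or Transformers API.

```python
def check_generation_batch(input_ids):
    """Raise if a generation batch violates the batch-size-1 constraint.

    Hypothetical guard illustrating the removed caution note: padding of
    Meta tokens + SWA was not fully supported, so generation required a
    single sequence per call. Training and pre-filling had no such limit.
    """
    batch_size = len(input_ids)
    if batch_size != 1:
        raise ValueError(
            f"Generation requires batch size 1 (got {batch_size}); "
            "padding of Meta tokens + SWA is not supported."
        )
    return batch_size


# A single-sequence batch passes the check.
check_generation_batch([[101, 2023, 2003, 102]])
```

A caller would loop over sequences one at a time for generation, while batching freely during training or pre-fill.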