Upload README.md with huggingface_hub
README.md CHANGED
@@ -8,4 +8,4 @@ Mixture of Tokens is a fully-differentiable model that retains the benefits of M


## Tips:
-During inference, the model's computational performance is derived from combining tokens across batches into groups of a specified size, denoted as group_size in the model configuration. If the batch size is not evenly divisible by `group_size`, the model will internally pad the batch to ensure divisibility. To achieve optimal performance, it is advisable to conduct batched inference using a batch size that is a multiple of `group_size`.
+During inference, the model's computational performance is derived from combining tokens across batches into groups of a specified size, denoted as `group_size` in the model configuration. If the batch size is not evenly divisible by `group_size`, the model will internally pad the batch to ensure divisibility. To achieve optimal performance, it is advisable to conduct batched inference using a batch size that is a multiple of `group_size`.
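As a rough illustration of the divisibility rule described in the changed line above, here is a minimal Python sketch. The `group_size` and batch-size values are placeholders, and reading the value from `model.config.group_size` is an assumption based on the README text, not a confirmed API.

```python
import math

# Placeholder value; in practice this would come from the model
# configuration (assumed attribute name: model.config.group_size).
group_size = 32
requested_batch_size = 100

# If the batch size is not a multiple of group_size, the model pads the
# batch internally. Rounding up ourselves keeps those padded slots from
# being wasted.
batch_size = math.ceil(requested_batch_size / group_size) * group_size

print(batch_size)  # 128 -> a batch size that needs no internal padding
```

Using a batch size chosen this way simply ensures the token groups are full, which is what the tip above recommends for optimal throughput.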