|
--- |
|
license: mit |
|
tags: |
|
- gguf |
|
- wip |
|
base_model: |
|
- 1bitLLM/bitnet_b1_58-3B |
|
model_type: bitnet |
|
quantized_by: Green-Sky |
|
language: |
|
- en |
|
--- |
|
|
|
# Highly experimental, not for general consumption |
|
The code needed to run this model, as well as the base model itself, is not ready yet.
|
|
|
This is uploaded merely to help with testing.
|
|
|
~~see https://github.com/ggerganov/llama.cpp/pull/7931~~
|
|
|
See https://github.com/ggerganov/llama.cpp/pull/8151, the continued work by compilade, which provides both 1.625 bpw and 2 bpw quant types.
|
|
|
> [!IMPORTANT]
> The old formats have been / will be removed, and this model is not supported by the new `TQ1_0` and `TQ2_0` quants.
> Its size is unfortunately inconvenient for the new quant types.
> See https://huggingface.co/Green-Sky/TriLM_3.9B-GGUF for a more up-to-date model and quants.