https://huggingface.co/AuraIndustries/Aura-MoE-2x4B

#496 by jeiku - opened

https://huggingface.co/AuraIndustries/Aura-MoE-2x4B

Trying a MoE this time, with an FFT on the KTO set to heal it. It should work in llama.cpp; I was able to convert it with no issues.
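For reference, a minimal sketch of that conversion step, assuming a local llama.cpp checkout (for `convert_hf_to_gguf.py`) and a local clone of the model repo; the paths and output type here are placeholders, not the exact commands used:

```python
# Minimal sketch: convert the HF checkpoint to GGUF with llama.cpp's converter.
# Assumes this runs from the llama.cpp repo root; paths are placeholders.
import subprocess

subprocess.run(
    [
        "python", "convert_hf_to_gguf.py",      # converter script shipped with llama.cpp
        "Aura-MoE-2x4B",                        # local clone of the HF model repo
        "--outfile", "Aura-MoE-2x4B.f16.gguf",  # resulting GGUF file
        "--outtype", "f16",                     # keep full precision before quantizing
    ],
    check=True,
)
```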

Thank you!

Then I am not worried :)

mradermacher changed discussion status to closed

Unfortunately, quantization fails with `Missing importance matrix for tensor blk.0.ffn_gate_exps.weight in a very low-bit quantization`, meaning there will be no good imatrix quants.

That happens when the tensors are not covered 100% during imatrix measurement, for whatever reason.
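For context, a rough sketch of the usual imatrix workflow where this shows up, assuming llama.cpp's `llama-imatrix` and `llama-quantize` tools and a placeholder calibration file. With a MoE, experts the router rarely or never picks on the calibration data get little or no activation statistics, so their `blk.*.ffn_*_exps` tensors can end up missing from the imatrix that very low-bit quants require:

```python
# Rough sketch of the imatrix workflow; file names are placeholders.
import subprocess

# 1) Collect activation statistics over calibration text.
subprocess.run(
    ["llama-imatrix",
     "-m", "Aura-MoE-2x4B.f16.gguf",
     "-f", "calibration.txt",
     "-o", "imatrix.dat"],
    check=True,
)

# 2) Quantize to a very low-bit type using the imatrix; this is the step that
#    reports the missing importance matrix for an uncovered expert tensor.
subprocess.run(
    ["llama-quantize",
     "--imatrix", "imatrix.dat",
     "Aura-MoE-2x4B.f16.gguf",
     "Aura-MoE-2x4B.IQ2_XS.gguf",
     "IQ2_XS"],
    check=True,
)
```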

Ah... something was bound to go wrong with this abomination of a model. It's no big deal for me to just have static quants for this. Thanks for letting me know!

sure :)
