14B model detected as 7B
I've been working on merging a 14-billion-parameter model recently, but when it comes time to evaluate it, the system reports that the model has only 7 billion parameters instead of the expected 14 billion. It's funny that the top 7B model is actually a 14B one.
When you filter by the 7-8B size range on the Space, more than ten of the models are actually 14B.
There are quite a few models on the leaderboard where the indicated size is half the actual size:
- maldv/Qwentile2.5-32B-Instruct
- CultriX/Qwen2.5-14B-Wernickev3
...and many others, most of them Qwen-derived.
Hi! Thanks for the report!
We extract the number of parameters from the safetensors files automatically, in theory. @alozowski will be able to investigate why there is a mismatch when she comes back from vacation.
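In the meantime, this is roughly the kind of check involved (a minimal sketch using huggingface_hub's `get_safetensors_metadata` helper, not our actual extraction code):

```python
# Rough sketch, not the leaderboard's actual extraction code: sum the parameter
# counts declared in a repo's safetensors headers via huggingface_hub.
from huggingface_hub import get_safetensors_metadata

def count_params(repo_id: str) -> int:
    """Total parameter count declared across the repo's safetensors files."""
    meta = get_safetensors_metadata(repo_id)
    # parameter_count maps dtype name (e.g. "BF16") to a number of parameters
    return sum(meta.parameter_count.values())

for repo in ["maldv/Qwentile2.5-32B-Instruct", "CultriX/Qwen2.5-14B-Wernickev3"]:
    print(repo, f"~{count_params(repo) / 1e9:.1f}B parameters")
```

If those repos report ~14B and ~32B here but show up as half that on the leaderboard, the bug is on our side of the pipeline.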
For the difference between the comparator and the leaderboard, make sure you compare either raw or normalised scores on both (we have two ways of computing scores; it should be explained in the FAQ).
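Roughly, a normalised score rescales the raw score so that random guessing maps to 0 and a perfect score maps to 100. The sketch below assumes that kind of random-baseline rescaling; the exact formula we use is in the FAQ:

```python
# Sketch of a random-baseline normalisation (assumed formula, see the FAQ for
# the exact one): rescale so the random-guess baseline maps to 0 and 1.0 to 100.
def normalise(raw_score: float, random_baseline: float) -> float:
    """Map raw_score in [0, 1] to a 0-100 scale where random guessing scores 0."""
    return max(0.0, (raw_score - random_baseline) / (1.0 - random_baseline)) * 100

# e.g. a 4-choice benchmark has a random baseline of 0.25
print(normalise(0.25, 0.25))  # 0.0   -> random guessing
print(normalise(0.70, 0.25))  # 60.0
print(normalise(1.00, 0.25))  # 100.0
```

So a raw 0.70 and a normalised 60.0 are the same result shown on two different scales, which is why mixing the two across the comparator and the leaderboard looks like a discrepancy.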