R7B - Unsafe Results
@woofwolfy - Not sure if it was an issue with the quant process or a false positive; the quant was done as usual. I'll wait to hear back from HF and see if it's possible to solve; if not, well, it wasn't meant to be. The template was unmodified from the original, ugh.
As far as I remember, when you uploaded the first few quants everything was fine, but then when the rest appeared, something happened.
I'm leaning toward false positives because of their template. I'll wait to hear back from someone.
Trying to download it manually, even the llama.cpp release itself is getting flagged:
- https://github.com/ggerganov/llama.cpp/releases/download/b4416/llama-b4416-bin-win-cuda-cu11.7-x64.zip
- https://github.com/ggerganov/llama.cpp/releases/download/b4416/llama-b4416-bin-win-cuda-cu12.4-x64.zip
I am able to fetch it from the CLI though, which is weird.
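One sanity check worth doing on a CLI-fetched copy: VirusTotal file pages are keyed by the file's SHA-256, so you can confirm the local zip is byte-identical to the one VT scanned. A minimal sketch (the hash below is the one from the cu11.7 VT link; the file path is whatever you saved the download as):

```python
# Sketch: compare a locally downloaded release zip against the SHA-256
# in its VirusTotal report URL. A match means the flagged file IS the
# official release, i.e. the detection is about its contents, not tampering.
import hashlib

# SHA-256 from the VT link for llama-b4416-bin-win-cuda-cu11.7-x64.zip
VT_SHA256 = "762f91efbfa8e278652b39c390354751e9a124ad1991cdabcadb4b32eabe5075"

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so large zips don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_vt_report(path: str) -> bool:
    # True -> local file is byte-identical to the one VirusTotal scanned.
    return sha256_of(path) == VT_SHA256
```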
I'm confused.
But not on VT?
Now this is very interesting, could b4416 be infected?
Can you link your VT scans please?
I did open an issue to ask about this:
https://github.com/ggerganov/llama.cpp/issues/11077
The sheer size of this template, though:
https://github.com/ggerganov/llama.cpp/issues/11077#issuecomment-2571396549
I think that's what's causing the flagging, both for the builds and for the quants that retain the template, as protectai also pointed out.
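If the hypothesis is that an unusually large embedded chat template is what trips the AV heuristics, it's easy to measure that string directly. A hedged sketch, assuming the template lives in `tokenizer_config.json` under the usual `chat_template` key (the list-of-named-templates shape some configs use is handled too; there is no known size threshold AV engines use, so this only quantifies "how big is it"):

```python
# Sketch: report the size in characters of the chat template embedded in
# a tokenizer_config.json, to compare suspicious vs. clean model repos.
import json

def chat_template_size(tokenizer_config_path: str) -> int:
    with open(tokenizer_config_path, "r", encoding="utf-8") as f:
        config = json.load(f)
    template = config.get("chat_template", "")
    # Some configs store a list of {"name": ..., "template": ...} entries.
    if isinstance(template, list):
        return sum(len(t.get("template", "")) for t in template)
    return len(template)
```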
- llama-b4416-bin-win-cuda-cu11.7-x64.zip: https://www.virustotal.com/gui/file/762f91efbfa8e278652b39c390354751e9a124ad1991cdabcadb4b32eabe5075
- llama-b4416-bin-win-cuda-cu12.4-x64.zip: https://www.virustotal.com/gui/file/16ab6d0369e267aeaa6a2d288fa106c2676ddcdc4f32b1e7237dc3673fb54071
Will keep an eye on these: https://huggingface.co/Lewdiculous/llama.cpp-11077-test-01/tree/main
Ah, yeah, bingo:
https://www.virustotal.com/gui/file/762f91efbfa8e278652b39c390354751e9a124ad1991cdabcadb4b32eabe5075
I've scanned other releases; they all have the same triggers.
I'm kind of pointing at these template changes:
https://www.diffchecker.com/HJ5zHE2p/
(Then vs Now)
But those were made by the model authors. I just make sure to use their latest repo:
You need to get access to the repo: https://huggingface.co/CohereForAI/c4ai-command-r7b-12-2024
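For anyone who can't open the diffchecker link (or wants to redo the "Then vs Now" comparison locally), the same diff can be produced with stdlib `difflib`. A sketch, where the two strings are the old and new template texts you've saved yourself (the "then"/"now" labels are just for readability):

```python
# Sketch: unified diff of two chat-template versions, mirroring the
# diffchecker "Then vs Now" view with nothing but the standard library.
import difflib

def template_diff(old: str, new: str) -> str:
    return "\n".join(difflib.unified_diff(
        old.splitlines(), new.splitlines(),
        fromfile="then", tofile="now", lineterm=""))
```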
But this seems to be the issue: https://github.com/ggerganov/llama.cpp/issues/11077#issuecomment-2571404787
Welp, I reached out to let them know and see if this can be adjusted.
https://huggingface.co/CohereForAI/c4ai-command-r7b-12-2024/discussions/10