Converting to GGUF failed, do you have a method?
#4
by
JLouisBiz
- opened
Requirement already satisfied: mpmath<1.4,>=1.1.0 in /home/data1/protected/TTS/lib/python3.11/site-packages (from sympy==1.13.1->torch) (1.3.0)
Requirement already satisfied: MarkupSafe>=2.0 in /home/data1/protected/TTS/lib/python3.11/site-packages (from jinja2->torch) (3.0.2)
(TTS) ~/Programming/llamafile/Bling
$ python ../../git/llama.cpp/convert_hf_to_gguf.py bling-phi-3.5 --outfile quantized/bling-phi-3.5.gguf --outtype auto
INFO:hf-to-gguf:Loading model: bling-phi-3.5
Traceback (most recent call last):
File "/home/data1/protected/Programming/llamafile/Bling/../../git/llama.cpp/convert_hf_to_gguf.py", line 4637, in <module>
main()
File "/home/data1/protected/Programming/llamafile/Bling/../../git/llama.cpp/convert_hf_to_gguf.py", line 4617, in main
model_instance = model_class(dir_model=dir_model, ftype=output_type, fname_out=fname_out,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/data1/protected/Programming/llamafile/Bling/../../git/llama.cpp/convert_hf_to_gguf.py", line 102, in __init__
_, first_tensor = next(self.get_tensors())
^^^^^^^^^^^^^^^^^^^^^^^^
StopIteration
(TTS) ~/Programming/llamafile/Bling
Even though I have torch installed, I am still getting this error.
My repository wasn't in sync; let me close this.
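For anyone hitting the same `StopIteration` from `next(self.get_tensors())`: it typically means the converter found no weight tensors in the model directory at all, e.g. when a `git clone` only fetched LFS pointer stubs instead of the actual `*.safetensors` / `*.bin` files (running `git lfs pull` in the model directory usually fixes that). A minimal sketch of a pre-flight check, with a hypothetical helper name:

```python
from pathlib import Path

def has_weight_files(model_dir: str) -> bool:
    """Return True if the directory contains any tensor files the
    converter could load (*.safetensors or *.bin).

    An out-of-sync checkout (e.g. LFS pointers not pulled) has none,
    which is what makes convert_hf_to_gguf.py raise StopIteration.
    """
    p = Path(model_dir)
    return any(p.glob("*.safetensors")) or any(p.glob("*.bin"))
```

Running this on the model directory before invoking `convert_hf_to_gguf.py` makes the "repository not in sync" case obvious instead of surfacing as a bare `StopIteration`.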
JLouisBiz changed discussion status to closed