Replies: 4 comments 1 reply
-
Same issue while using gguf-my-repo. Did you ever figure out what caused this?
-
I have the same issue. Any update?
-
In my case, the model I was trying to convert had a bad tokenizer due to a mergekit bug which has since been fixed. Redoing the merge with `embed_slerp: true` allowed it to be converted to GGUF.
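For anyone hitting the same thing: a hedged sketch of where that option might go in a mergekit config. The model names and layer ranges here are placeholders, and the exact placement of `embed_slerp` may differ between mergekit versions, so check the mergekit docs for your release.

```yaml
# Hypothetical slerp merge config; only `embed_slerp: true` is taken
# from the comment above, everything else is an illustrative placeholder.
merge_method: slerp
base_model: some-org/base-model
slices:
  - sources:
      - model: some-org/base-model
        layer_range: [0, 32]
      - model: some-org/finetuned-model
        layer_range: [0, 32]
parameters:
  t: 0.5
embed_slerp: true
```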
-
This happens because those models treat special tokens as regular tokens. A Llama vocabulary has only 32,000 regular token slots; any tokens beyond that must be declared as special (added) tokens, not regular ones.
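One quick way to check whether a checkpoint has this problem is to compare the ids in the tokenizer's regular vocabulary against the model's base vocab size: any regular token with an id at or above that size should have been registered as an added/special token instead. A minimal sketch (the helper name and the toy vocabulary are illustrative, not part of any library API):

```python
def find_out_of_range_tokens(vocab: dict, vocab_size: int) -> list:
    """Return regular tokens whose id falls outside the model's
    base embedding table (id >= vocab_size). Such tokens should be
    declared as added/special tokens rather than regular ones."""
    return sorted(tok for tok, idx in vocab.items() if idx >= vocab_size)

# Toy example: a Llama-style model with a 32,000-entry base vocabulary.
# "<|new_tok|>" sits at id 32000, so it must be a special token.
vocab = {"<s>": 1, "hello": 15043, "<|new_tok|>": 32000}
bad = find_out_of_range_tokens(vocab, 32000)
print(bad)  # → ['<|new_tok|>']
```

In practice you would load the real mapping (e.g. from the tokenizer's `get_vocab()` or from `tokenizer.json`) instead of the toy dict above.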
-
I want to convert a fine-tuned Code Llama model into a GGUF file, so I ran the following command:
python convert_hf_to_gguf.py /data/fine-tune/code_llama_fine_tune --outfile test.gguf --outtype q8_0
But I got an out-of-range error: