Function calling not working well
#1 by huayranus - opened
Thanks for testing this out.
Curl Commands
This is tricky to do because the prompt must be formatted exactly right. One option is to copy-paste the sample formatted prompt from the model card and use that.

OpenAI Style requests

The problem you're having there is that the end-of-turn token for Llama 3 is not the same as the eos_token in the tokenizer_config.json file. I've swapped the eos_token now, so if you re-run against an OpenAI-style endpoint (whether TGI or vLLM) it should produce clean results.
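For reference, an OpenAI-style request body with a tool defined looks roughly like this. This is a minimal Python sketch that only builds the payload; the endpoint URL, model name, and the get_current_weather schema are illustrative assumptions, not this repo's actual values.

```python
import json

# Hypothetical values -- substitute your own deployment and model id.
BASE_URL = "http://localhost:8000/v1/chat/completions"
MODEL = "your-org/your-function-calling-model"

def build_payload(user_message: str) -> dict:
    """Build an OpenAI-style chat completion request with one tool defined."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": user_message}],
        "tools": [
            {
                "type": "function",
                "function": {
                    # Example tool for illustration only.
                    "name": "get_current_weather",
                    "description": "Get the current weather for a city",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "city": {"type": "string", "description": "City name"}
                        },
                        "required": ["city"],
                    },
                },
            }
        ],
    }

payload = build_payload("What is the weather in London?")
print(json.dumps(payload, indent=2))
```

POSTing this JSON to the endpoint (e.g. with requests or curl) should now terminate cleanly with the corrected eos_token.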
Note that the following caveats still apply:
- The model calls functions correctly, but does not make good use of function responses.
- Unlike OpenChat 3.5 (the function calling fine-tune), the model is weak at chaining function calls.
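On the first caveat: when testing how the model uses function responses, the result is normally fed back as a tool-role message in the next request. A sketch of that message flow, OpenAI-style (the call id, function name, and result payload below are made up for illustration):

```python
import json

def messages_with_tool_result(question: str, call_id: str, fn_name: str,
                              fn_args: dict, fn_result: dict) -> list:
    """Build an OpenAI-style message list that feeds a function's
    result back to the model for a follow-up completion."""
    return [
        {"role": "user", "content": question},
        {
            # The assistant's earlier turn, recording the tool call it made.
            "role": "assistant",
            "content": None,
            "tool_calls": [{
                "id": call_id,
                "type": "function",
                "function": {"name": fn_name, "arguments": json.dumps(fn_args)},
            }],
        },
        {
            # The function's result, linked back via tool_call_id.
            "role": "tool",
            "tool_call_id": call_id,
            "content": json.dumps(fn_result),
        },
    ]

msgs = messages_with_tool_result(
    "What is the weather in London?",
    call_id="call_1",
    fn_name="get_current_weather",
    fn_args={"city": "London"},
    fn_result={"temperature_c": 14, "condition": "cloudy"},
)
print(json.dumps(msgs, indent=2))
```

Sending these messages back to the endpoint is where the caveat shows up: the model may not summarize the tool result well.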
Thanks a lot, now it works!
RonanMcGovern changed discussion status to closed