Function call validation approach?

Some more details: https://platform.openai.com/docs/guides/fine-tuning/fine-tuning-examples (under the function-calling section). This is what I'm using as a reference to keep the history accurate with regard to the base function-calling pattern; it might help you produce the function response object correctly.
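As a minimal sketch of what that history pattern looks like (the legacy `function_call`/`function`-role shape; `get_weather` and its arguments are placeholders I made up, not from the docs):

```python
# Sketch of the base function-calling history pattern (legacy "function" role).
# get_weather and the argument values are illustrative placeholders.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What's the weather in Paris?"},
    # The assistant turn that requested the call, echoed back verbatim:
    {
        "role": "assistant",
        "content": None,
        "function_call": {
            "name": "get_weather",
            "arguments": '{"location": "Paris"}',
        },
    },
    # The function's result, serialized as a string under the "function" role:
    {
        "role": "function",
        "name": "get_weather",
        "content": '{"temperature_c": 18, "condition": "cloudy"}',
    },
]
```

The key part for keeping the history accurate is echoing the assistant's `function_call` turn back unchanged before appending the `function` result.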

I think beyond this feedback pattern, along with correctly programmed retry mechanics, what you'd be looking at now is better prompt engineering for your functions~ try to address all your failure cases in either the system prompt or the function description.
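One way the retry mechanics could be wired up: validate the parsed arguments, and on failure feed the error back into the history before retrying. A rough sketch under my own assumptions (`call_model`, `validate_args`, and the exact feedback shape are mine, not from the docs):

```python
import json

MAX_RETRIES = 3

def run_with_retries(call_model, validate_args, messages):
    """Retry a function call until its arguments validate or retries run out.

    call_model(messages) -> assistant message dict (may contain "function_call")
    validate_args(name, args) -> raises ValueError on bad arguments
    """
    for attempt in range(MAX_RETRIES):
        reply = call_model(messages)
        messages.append(reply)
        fc = reply.get("function_call")
        if fc is None:
            return reply  # plain assistant answer, nothing to validate
        try:
            args = json.loads(fc["arguments"])
            validate_args(fc["name"], args)
            return reply  # arguments parsed and validated
        except (json.JSONDecodeError, ValueError) as err:
            # Feed the failure back so the model can correct itself.
            messages.append({
                "role": "function",
                "name": fc["name"],
                "content": json.dumps({"error": str(err)}),
            })
    raise RuntimeError("function call failed validation after retries")
```

Whether that error feedback belongs in the `function` role or a system message is exactly the open question discussed further down.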

A good thing to know when creating more complicated structures is some take-aways from this thread: How to calculate the tokens when using function call - #9 by _j where it shows that, among other things, "description"s for properties of type "object" are not included.
Basically your JSON gets parsed into a TypeScript-like interface structure, where things like description and default end up as // comments and so on, but OpenAI's "parser"/"transformer" from JSON to TS is very limited (you'll find a couple of people, including myself, writing their own implementations to overcome these limitations)~
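To illustrate, here's a rough approximation of that kind of schema-to-interface transformer (my own sketch, not OpenAI's actual code). Unlike the official transformer, this one keeps descriptions on every property it walks over, which is exactly the kind of fix custom implementations add:

```python
def schema_to_ts(name, schema, indent="  "):
    """Naively render a function's JSON-schema parameters as a TS-like interface.

    Sketch only: top-level scalar types are mapped, descriptions and defaults
    become // comments. A real implementation would also recurse into nested
    "object" properties, whose descriptions the official transformer drops.
    """
    type_map = {"string": "string", "number": "number",
                "integer": "number", "boolean": "boolean"}
    lines = [f"type {name} = (_: {{"]
    for prop, spec in schema.get("properties", {}).items():
        if "description" in spec:
            lines.append(f"{indent}// {spec['description']}")
        ts_type = type_map.get(spec.get("type"), "object")
        default = f" // default: {spec['default']}" if "default" in spec else ""
        lines.append(f"{indent}{prop}?: {ts_type},{default}")
    lines.append("}) => any;")
    return "\n".join(lines)
```

Counting tokens over output like this is roughly how the linked thread estimates function-definition overhead.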

Edit:
actually this got me interested and I decided to have a conversation with chatgpt about it https://chat.openai.com/share/b698c075-b36a-4739-9d78-a7845e3b3a11

Of course a lot of it might just be hallucinations, but it keeps telling me that it only expects output from the function role for a successful call, whereas otherwise it expects a system message. In this thread it's talking about structures like the one @supershaneski mentioned (error: ... message: ...).

So perhaps the original approach of using system messages might actually be more in line with what the bot expects/is trained on. The docs don't really address this~
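Concretely, the two shapes being compared would look something like this (both are assumptions about what the model is trained on, not anything documented; the function name and error text are placeholders):

```python
# Pattern A: report the failure through the "function" role itself,
# using an error/message structure like the one mentioned above.
function_error = {
    "role": "function",
    "name": "get_weather",
    "content": '{"error": "invalid_arguments", "message": "location is required"}',
}

# Pattern B: report the failure as a system message, per the original approach,
# reserving the "function" role for successful results only.
system_error = {
    "role": "system",
    "content": "The call to get_weather failed: location is required. "
               "Please correct the arguments and try again.",
}
```

Either message would be appended to the history before re-asking the model, as in the retry loop above.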
