No way:
I set this as the system message: “If you identify the need to reply with a function call, but some required parameters are missing, reply with a follow-up question asking for the missing parameters.”
So there’s zero magic that OpenAI is doing here. As awesome as functions sound in principle, there are zero guarantees that the model will comply and call your function with the required parameters. You need a guard that catches whether the model returned a valid function call, and then you need to tell the model to correct its response if it returned something invalid.
I would also say: don’t give the model passive feedback. Tell the model the specific changes you want it to make to its response:
return JSON
add a “plan” property to your response
etc.
I give this level of feedback to the model in my AlphaWave project, and it’s literally been several thousand model calls since I’ve seen the model return bad JSON.
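For anyone who wants to try this, here’s a minimal sketch of that guard-and-repair loop. It assumes the pre-1.0 `openai` Python SDK, and the `get_weather` function, model snapshot, and repair messages are all illustrative, not AlphaWave’s actual code:

```python
import json
import openai  # assumes the pre-1.0 openai-python SDK

# Hypothetical function definition, used here only for illustration
FUNCTIONS = [{
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

def call_with_guard(messages, max_repairs=2):
    """Call the model, validate any function_call it returns, and feed
    back specific corrective instructions until the response is valid."""
    for _ in range(max_repairs + 1):
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo-0613",
            messages=messages,
            functions=FUNCTIONS,
        )
        message = response["choices"][0]["message"]
        call = message.get("function_call")
        if call is None:
            return message  # plain chat response, nothing to validate

        try:
            args = json.loads(call["arguments"])
        except json.JSONDecodeError:
            feedback = "Your function arguments were not valid JSON. Return valid JSON."
        else:
            missing = [p for p in ("city",) if p not in args]
            if not missing:
                return message  # valid function call
            feedback = f"Your function call is missing required parameters: {missing}. Add them."

        # Active feedback: show the model its bad response plus the exact fix
        messages = messages + [message, {"role": "user", "content": feedback}]
    raise RuntimeError("Model never produced a valid function call")
```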
I tried your approach and it seems to be the only one that works.
On the contrary, using “strong” system messages and adding hints in the parameter descriptions did not help.
Still, it takes some time to build the full blacklist of “fake values” ChatGPT returns for a parameter such as email or username.
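A sketch of what I mean by a blacklist; the placeholder values listed here are examples of the sort of thing I saw, not an exhaustive list:

```python
# Hypothetical blacklist built up from observed hallucinated values
FAKE_VALUES = {
    "email": {"user@example.com", "john.doe@example.com", "test@test.com"},
    "username": {"johndoe", "user123", "username"},
}

def looks_fabricated(param: str, value: str) -> bool:
    """True if the model likely invented this value instead of asking the user."""
    return value.strip().lower() in FAKE_VALUES.get(param, set())
```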
During my testing with 3.5-turbo chat completion, the ‘required’ parameter was consistently honored until I added ‘temperature’. I also had issues triggering function calling until I eliminated the system message. I really hope OpenAI can offer some kind of support here.
You can force it to choose a function (over a chat response) by populating the function_call field of your request and passing in the name of the function.
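For example, with the pre-1.0 Python SDK (the function definition here is hypothetical):

```python
import openai  # pre-1.0 openai-python SDK

FUNCTIONS = [{  # hypothetical function definition for illustration
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "Weather in Paris?"}],
    functions=FUNCTIONS,
    # Forces a call to this function instead of a chat reply; the default
    # "auto" lets the model decide, and "none" disables function calls.
    function_call={"name": "get_weather"},
)
```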
That’s weird; in my situation it was the opposite: the model was “aggressively” calling a function and making up parameters such as username or email…
Supposing OpenAI is not silently changing something in the model over time, the only thing we can do is compare function definitions, including the descriptions on the function and its parameters.
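One small thing that can help with that comparison, a sketch only: fingerprint your function definitions on every run, so that when behavior shifts you can at least rule out your own schema edits and point at the model:

```python
import hashlib
import json

def schema_fingerprint(functions: list) -> str:
    """Stable hash of the function definitions (names, descriptions,
    parameters) so behavior changes can be attributed either to your
    own schema edits or to a silent model update."""
    canonical = json.dumps(functions, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:12]

# Log this alongside each run's results, e.g. "schema=3fa1c2d9e0b4"
```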