ChatGPT function calling: required parameters are ignored

Hi, I am playing with function calling.

It seems that required properties are ignored.
Given the function definition below, when I ask "list my bookings", the response is:

{
  "userName": "exampleUser",
  "bookingStatus": "NotSet"
}

But since the parameter is marked as required, why doesn't GPT reply with a follow-up question asking me to specify the userName?

 {
            "name": "GetBookingsByUser",
            "description": "returns the bookings available for the user",
            "parameters": {
                "$schema": "http://json-schema.org/draft-04/schema#",
                "type": "object",
                "additionalProperties": false,
                **"required": ["bookingStatus", "userName"],**
                "properties": {
                    "userName": {
                        "type": "string",
                        "description": "The userName"
                    },
                    "bookingStatus": {
                        "type": "string",
                        "description": "The booking Status",
                        "x-nullable": true,
                        "enum": ["NotSet", "Pending", "Confirmed", "Cancelled"]
                    }
                }
            }
        },
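For context, a minimal sketch of the kind of request involved (assuming the legacy openai Python SDK, pre-1.0; the model name is illustrative):

import json
import openai  # legacy SDK (pre-1.0), openai.ChatCompletion interface

# The function definition above, wrapped in the list the API expects.
booking_functions = [
    {
        "name": "GetBookingsByUser",
        "description": "returns the bookings available for the user",
        "parameters": {
            "type": "object",
            "additionalProperties": False,
            "required": ["bookingStatus", "userName"],
            "properties": {
                "userName": {"type": "string", "description": "The userName"},
                "bookingStatus": {
                    "type": "string",
                    "description": "The booking Status",
                    "enum": ["NotSet", "Pending", "Confirmed", "Cancelled"],
                },
            },
        },
    }
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",  # illustrative
    messages=[{"role": "user", "content": "list my bookings"}],
    functions=booking_functions,
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    # The model calls the function with invented values instead of asking a
    # follow-up question, e.g. {"userName": "exampleUser", "bookingStatus": "NotSet"}
    print(json.loads(message["function_call"]["arguments"]))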

Why don't you tell it to ask follow-up questions if it isn't given the required information? Put it in a system message or something.

No luck. I set this as the system message: "If you identify the need to reply with a function call, but some required parameters are missing, reply with a follow-up question asking for the missing parameters"

But I got:

{
  "userName": "your_username",
  "bookingStatus": "NotSet"
}

In my testing, you need to handle such cases yourself. If you get a malformed result, you need to send an error reply.

For example, in your case, you send something like this in the next chat completions call and it will ask a follow-up question.

messages: [
  { "role": "assistant", "content": null, "function_call": { "name": "GetBookingsByUser", "arguments": "{\"userName\": \"your_username\", \"bookingStatus\": \"NotSet\"}" } },
  { "role": "function", "name": "GetBookingsByUser", "content": "{\"error\": \"Invalid userName\"}" }
]
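In Python the round trip looks roughly like this (a sketch assuming the legacy openai Python SDK, pre-1.0; booking_functions is the definition list from the first post):

import json
import openai

# The bad call the model produced for "list my bookings"
assistant_msg = {
    "role": "assistant",
    "content": None,
    "function_call": {
        "name": "GetBookingsByUser",
        "arguments": '{"userName": "your_username", "bookingStatus": "NotSet"}',
    },
}

# Instead of executing the function with a made-up userName, send an error back
# as the function result; the model then asks the user a follow-up question.
followup = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[
        {"role": "user", "content": "list my bookings"},
        assistant_msg,
        {
            "role": "function",
            "name": "GetBookingsByUser",
            "content": json.dumps({"error": "Invalid userName"}),
        },
    ],
    functions=booking_functions,  # same definitions as in the original request
)
print(followup["choices"][0]["message"]["content"])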

I've been facing this issue a lot as well, despite all manner of prompt engineering and even using the latest GPT-4 API.

I've simply resorted to having my function handle the 30% of cases where the arg is missing. It adds more complexity but lets you be more sure.
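Roughly, the guard inside the function itself (a hypothetical sketch; the names are mine, not an OpenAI API):

def get_bookings_by_user(user_name: str = "", booking_status: str = "NotSet") -> dict:
    """Absorb the cases where the model omitted or invented userName,
    instead of trusting the schema's 'required' list."""
    if not user_name or user_name.lower() in {"your_username", "exampleuser"}:
        # Returned as the function result; the model relays it as a follow-up question.
        return {"error": "userName is missing; ask the user which account to look up"}
    # ... real booking lookup omitted ...
    return {"bookings": []}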

So there's zero magic that OpenAI is doing here. As awesome as functions sound in principle, there are zero guarantees the model will comply and call your function with the required parameters. You need a guard to catch whether the model returned a valid function call, and then you need to tell the model to correct its response if it returned something invalid.

I would also say don’t use passive feedback to the model. Tell the model the specific changes you want it to make to its response:

  • return JSON
  • add a “plan” property to your response
  • etc.

I give this level of feedback to the model in my AlphaWave project and it’s literally been several thousand model calls since I’ve seen the model return bad JSON.
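A guard with that kind of active feedback might look roughly like this (a sketch under the same legacy-SDK assumptions as above, not AlphaWave itself; call_with_repair and REQUIRED_ARGS are my own names):

import json
import openai

REQUIRED_ARGS = {"userName", "bookingStatus"}

def call_with_repair(messages, functions, max_repairs=2):
    """Call the model; if the function arguments are invalid, tell it exactly
    what to change and retry."""
    for _ in range(max_repairs + 1):
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo-0613", messages=messages, functions=functions
        )
        message = response["choices"][0]["message"]
        call = message.get("function_call")
        if call is None:
            return message  # plain chat reply, nothing to validate
        try:
            args = json.loads(call["arguments"])
        except json.JSONDecodeError:
            args = None
        missing = REQUIRED_ARGS - set(args or {})
        if args is not None and not missing:
            return message  # valid function call
        # Specific, active feedback: say exactly what to fix, not just "that was wrong".
        feedback = (
            "Your last function call was invalid. Return valid JSON arguments and "
            f"include values for: {', '.join(sorted(missing or REQUIRED_ARGS))}. "
            "If you do not know a value, ask the user for it instead of guessing."
        )
        messages = messages + [message, {"role": "user", "content": feedback}]
    raise RuntimeError("model did not produce a valid function call")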

Thank you for your suggestions.
It would be better if OpenAI did not set false expectations with samples such as the one reported here.

I tried your approach and it seems to be the only one that works.
By contrast, using "strong" system messages and adding hints in the parameter descriptions did not help.

Still, it takes some time to build up the full blacklist of "fake values" ChatGPT returns for a parameter such as email or username.
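For reference, the kind of blacklist I mean (the values are illustrative only; it grows as you catch new ones in testing):

# Placeholder values the model tends to invent when it does not actually know
# the parameter; "exampleUser" and "your_username" are the ones seen above,
# the rest are illustrative guesses.
FAKE_USERNAMES = {"your_username", "exampleuser", "username", "john_doe"}
FAKE_EMAILS = {"user@example.com", "your_email@example.com", "example@example.com"}

def is_fake(value: str) -> bool:
    return value.strip().lower() in FAKE_USERNAMES | FAKE_EMAILS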


During my testing with 3.5-turbo chat completion, the 'required' parameter was consistently honored until I added 'temperature'. I also had issues triggering function calling until I removed the system message. I really hope OpenAI can offer some kind of support here.

You can force it to choose a function (over a chat response) by populating the function_call field of your request and passing in the name of the function.
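For example (a sketch with the legacy openai Python SDK; booking_functions is the definition list from earlier in the thread):

import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "list my bookings"}],
    functions=booking_functions,
    function_call={"name": "GetBookingsByUser"},  # force this function instead of a chat reply
)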

That's weird; in my situation, on the contrary, it was "aggressively" calling a function and making up parameters such as username or email…

Assuming OpenAI is not silently changing the model over time, the only thing we can do is compare the function definitions, including the descriptions on the function and its parameters.