Problem with 'required' property when function calling

Hi,

I’m getting unexpected behaviour when using the required property in a ‘function calling’ definition, but it may be a misunderstanding of how it should work. The API still fires a function call when some ‘required’ properties in my definition have not yet been inferred and are still blank, and it then passes these empty values to my function. Can someone outline how to use these ‘required’ properties so that the chat continues as normal when a required property is empty? My broader question is: when does the ‘required’ validation take place? Do I need to make another call to GPT to enforce it after the function call, or should it have kicked in beforehand?
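As a stopgap, I’m re-checking the ‘required’ properties myself after the model returns a function call. A minimal sketch, assuming the arguments arrive as a JSON string (the helper name is my own):

```python
import json

def missing_required(arguments_json, required):
    """Return the required keys that are absent or empty in the model's arguments."""
    try:
        args = json.loads(arguments_json)
    except json.JSONDecodeError:
        return list(required)  # unparseable arguments fail every requirement
    # Note: this also treats "" and null as missing, which is the failure mode here.
    return [k for k in required if not args.get(k)]

# The model "called" the function but left a required field blank:
gaps = missing_required('{"confidence": 1}', ["name", "confidence"])  # ["name"]
```

If the list is non-empty, I skip the function and carry on the chat instead.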

Thanks in advance!


Further to this, and related: is there a way of referencing the notion of a function from within a prompt? e.g. ‘do not call a function until…’. This might help with additional prompt-driven validation, or with applying other criteria to prevent function calling where it would be inappropriate.


I am seeing the same behaviour in multiple cases:

One example is below, where the “name” key is missing from the output JSON.

The output JSON is: {"confidence": 1}

Schema is:

{
            "name": `single_name_matched`,
            "description": `Matched a apple laptop model name`,
            "parameters": {
                "type": "object",
                "properties": {
                    "name": {
                      "type": "string",
                      "description":  < some description>,
                    },
                    "confidence": {
                      "type": "number",
                      "description": `Confidence on match, a number between 0 to 1.`,
                    },
                },
                "required": ["name, "confidence"],
            },
          }

The chat completions endpoint seems less reliable to me than the completions endpoint.

An elaborate chat-completion prompt with function calling on 3.5-turbo seems to be worse than davinci. Davinci gave me more reliable JSON on the completions endpoint, though it cost more tokens.

I am also facing this issue: despite my specifying the “required” properties, the model still calls the function with only some of the arguments, with some required ones missing. Why does it not comply with the “required” properties?

I have tried putting in system messages and property descriptions telling it to NEVER call the function if required arguments are missing, etc., but it still calls the function regardless. I am utterly stuck and would appreciate some assistance.

I have also set the temperature of the function-calling completion call to 0 or 0.1 to make it comply, yet to no avail.


Enums seem to work OK for me so far, so some validation stage does appear to kick in; why not for required?

I noticed that “required” and “enums” are not respected. Some of the scenarios are:

  • if there are any typos in the schema, e.g. a property name is misspelled somewhere
  • if there are special characters in the enum

Other cases?

It still seems the plain completions API is more reliable than chat completions with functions.

Yeah. I’ve resorted to handling, in the function itself, the frequent cases where an argument is missing despite being ‘required’, since I have been unable to wrangle either the 3.5 or 4 models anywhere close to a 95% success rate.

same here … see also Chat gpt function calling, requited parameters are ignored - #3 by sabbadin12

Has anyone had any better luck steering the function via its description in the function definition? Do we get better results if we use something like:

{
  "name": "DoAThing",
  "description": "Do a thing function that fires when all required property values are captured.",
  "parameters": {
    "type": "object",
    "properties": { …

Hi,

Put yourself in the shoes of the AI: the only information you have is what is contained in the API call being made at that very second. You have no history, and no idea of the internal state of your code’s parameters and variables.

Given that information, could you accurately and consistently return a valid call to a function of that name, with the required parameters? If the answer is yes, great: the function will perform well in most situations. If the answer is no, or maybe… then you will have problems with the function call.

In this case, we are focussed on the required parameter and users’ experiences of how well it works. Clearly there is assumed to be some kind of validation layer somewhere that only produces function output once the required properties are populated, but we are finding that is not always the case. Yet enums, which I assume would use a similar validation layer, seem to kick in more reliably (for me at least).

As you state, the assumption has always been that the property values should be inferred from the prompt alone; there is no need for internal states, code or variables. I think you may be referring to whether the function fires at all, but here it is firing too often: if any required properties could not be inferred, the function should not fire.

I’ll do some more testing, but are you suggesting you have not experienced this issue? It is possible OpenAI has made progress on it.

There are no validation layers or checks of any kind. This is all done by the AI itself, which has been trained and fine-tuned to produce JSON-formatted output and to pick from an array of text identifiers (the function names and descriptions); it is all still the AI behind the curtain doing smart things with the information it has at hand.

When crafting function calls, you need to think about what information the AI has at that moment in time. What are the function names like? Informative? Would it be clear to the AI what each function does and when it should be used? Imagine the AI is a new employee to whom you need to explain everything required to complete the task.
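Practically, that means any enforcement of required has to happen in your own code once the call comes back. One option is to hand the model an error as the function result so it asks the user instead of running on bad data; a sketch, with message shapes based on the function-calling response format (the helper name is illustrative):

```python
import json

def function_result_for(call, required):
    """If the model's function call is missing required arguments, build an
    error payload to send back as the function's result so the model can ask
    the user for them; return None when the call is complete and safe to run."""
    try:
        args = json.loads(call["arguments"] or "{}")
    except json.JSONDecodeError:
        args = {}
    gaps = [k for k in required if k not in args]
    if gaps:
        return {
            "role": "function",
            "name": call["name"],
            "content": json.dumps({"error": f"missing required arguments: {gaps}"}),
        }
    return None  # all required arguments present: actually execute the function
```

Appending that error message to the conversation and calling the model again usually gets it to ask the user for the missing values.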

I’m not sure if that is the only problem, but a closing quote is missing from “name” in your required section.

I will share some experience here, since I had a similar issue. I am using GPT-4 and it was hallucinating values drawn from its generic training data. I was able to override the hallucination by doing the following:

  • add this to the system message: Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous - I learned this from their playbook example
  • add this to the description of the required value: do not use 'default' and is required from user input. In my case it would sometimes try to plug in a default value.
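Put together, the two additions might look like this (a sketch; the function and field names are illustrative, not from a real schema):

```python
# System message discouraging invented argument values.
messages = [{
    "role": "system",
    "content": (
        "Don't make assumptions about what values to plug into functions. "
        "Ask for clarification if a user request is ambiguous."
    ),
}]

# Property description that forbids defaulting (hypothetical function/field names).
functions = [{
    "name": "book_flight",
    "description": "Book a flight for the user.",
    "parameters": {
        "type": "object",
        "properties": {
            "departure_date": {
                "type": "string",
                "description": "Do not use a default; this is required from user input.",
            },
        },
        "required": ["departure_date"],
    },
}]
```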

Hope this helps


Sorry for necromancing this old thread, but @jknt: I wanted to point out that the JSON you posted here is actually broken.

Ignoring description (which you obviously elided on purpose), there’s a missing closing double quote for the string name in the required array.

If the JSON you posted is exactly the JSON you passed to the model (sans description), that could certainly have contributed to the problems you were seeing.


Resurrecting an old post that does not have a resolution is fine if you have a new insight as you do!

(Also Welcome!)


It’s not perfectly reliable, but this should work:

  1. make a description for the parameter: “…if you cannot explicitly find the value for this parameter directly in immediate user input, the default value is ‘unspecified’”
  2. make it a required function parameter in the required list.
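If you use this trick, downstream code then has to treat the sentinel as missing; a sketch, where 'unspecified' matches the description above (the helper name is illustrative):

```python
SENTINEL = "unspecified"

def resolved_args(args, required):
    """Return the arguments only when every required value was genuinely
    extracted from user input; the sentinel counts as missing."""
    if any(args.get(k, SENTINEL) == SENTINEL for k in required):
        return None  # fall back to asking the user instead of calling the function
    return args
```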

It doesn’t seem to work for me… the AI still returns fake parameters.

Hi, all!
I’m experiencing the same problem. Consistently the assistant is calling the function without any of the required parameters.
I have this as system instructions: Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous.
and this is the function that gets called with no parameters:

{
  "name": "mergeCustomers",
  "description": "merge customers, the user must provide both the primary and secondary customer ids. If they are not provided, you must ask for them.",
  "parameters": {
    "type": "object",
    "properties": {
      "primaryCustomerId": {
        "description": "customer that will be kept. if you cannot explicitly find the value for this parameter directly immediately ask for it.",
        "type": "object"
      },
      "secondaryCustomerId": {
        "description": "customer that will be deleted after the merge. if you cannot explicitly find the value for this parameter directly immediately ask for it.",
        "type": "object"
      }
    },
    "required": [
      "primaryCustomerId",
      "secondaryCustomerId"
    ]
  }
}

From what I understand from the messages above, I should prepare my function to receive no parameters. Is this correct? There’s no way to force the assistant to honor the required parameters, correct?
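In the meantime I’m guarding the handler myself; a sketch, where merge_customers is a hypothetical stand-in for the real backend call:

```python
def merge_customers(primary_id, secondary_id):
    """Hypothetical stand-in for the real backend merge call."""
    return f"merged {secondary_id} into {primary_id}"

def handle_merge_call(arguments):
    """Never merge when a required id is missing or empty; return a
    clarification question for the chat instead."""
    primary = arguments.get("primaryCustomerId")
    secondary = arguments.get("secondaryCustomerId")
    if not primary or not secondary:
        return "Which customers should be merged? Please provide both the primary and secondary customer ids."
    return merge_customers(primary, secondary)
```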
Thanks in advance