The Assistant will never recognize a required parameter that is of object type in Function Tools

{
  "type": "function",
  "function": {
    "name": "debug_function",
    "description": "Used to test the Assistant's ability to understand parameters.",
    "parameters": {
      "type": "object",
      "properties": {
        "MyCustomObject": {
          "type": "object",
          "properties": {
            "test_str": {"type": "string"},
            "test_int": {"type": "integer"}
          },
          "additionalProperties": {
            "type": "string"
          },
          "minProperties": 1,
          "description": "An object with unknown keys. Each key-value pair represents an item and its quantity."
        }
      },
      "required": ["MyCustomObject"]
    }
  }
}

When prompting the Assistant to call the debug_function with something like "Please call your debug function.", it will never ask you for a value for MyCustomObject. Even if you specifically prompt it with "Please run your debug function. Ask me for any parameters that you require.", it will execute the function with no arguments, and even respond:

Assistant: The debug function has been executed successfully. There are no parameters required for this function. If you have any other requests or need assistance with specific tasks, please let me know!

I thought perhaps that minProperties or additionalProperties might not be supported and was breaking the Assistant, but no matter which schema keywords I add or remove, it won't see the object as a required parameter.

The object's PROPERTIES are what you specify in required.
So you could have test_str and test_int as your required parameters, not the whole object.

I updated your function.

{
  "name": "call_debug_function",
  "description": "Executes the debug function.",
  "parameters": {
    "type": "object",
    "properties": {
      "MyCustomObject": {
        "type": "object",
        "properties": {
          "key": {
            "type": "string"
          },
          "value": {
            "type": "integer"
          }
        },
        "required": [
          "key",
          "value"
        ],
        "description": "A custom debug object with variable keys. Each key-value pair represents an item and its quantity."
      }
    },
    "required": [
      "MyCustomObject"
    ]
  }
}

An instruction like this is no help:

  • call_debug_function, calls the debug function, requires key from user.

But validating the call and returning a message like this as the tool output helps:

{"status": "invalid value", "message": "please ask the user to provide key"}
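As a minimal sketch of that approach (assuming the OpenAI Python SDK and a run that is already waiting on the tool call; the helper name and the exact argument check are only illustrative):

import json
from openai import OpenAI

client = OpenAI()

def validate_and_submit(thread_id, run_id, tool_call):
    """Check one tool call's arguments and, if they are incomplete, return a
    validation message as the tool output instead of executing the function."""
    args = json.loads(tool_call.function.arguments or "{}")

    if "MyCustomObject" not in args:
        # Tell the model the call was invalid so it goes back to the user.
        output = {"status": "invalid value",
                  "message": "please ask the user to provide key"}
    else:
        output = {"status": "ok"}  # or the real result of the debug function

    client.beta.threads.runs.submit_tool_outputs(
        thread_id=thread_id,
        run_id=run_id,
        tool_outputs=[{"tool_call_id": tool_call.id,
                       "output": json.dumps(output)}],
    )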

Playground:

You still have 'MyCustomObject' as required.
The function also does not work for more than one key.
You should also consider adding more information in the description about how the function should be called, and lastly you should add a "description" for each field in the object to describe what you are expecting.
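For example, the inner object might carry a description on every field, something along these lines (the wording is only a suggestion, shown as the Python dict you would pass inside tools=[...]):

my_custom_object_schema = {
    "type": "object",
    "description": "A single debug item. Ask the user for both fields before calling.",
    "properties": {
        "key": {
            "type": "string",
            "description": "The name of the item, e.g. 'apples'."
        },
        "value": {
            "type": "integer",
            "description": "The quantity of the item, e.g. 3."
        }
    },
    "required": ["key", "value"]
}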

Lastly, you seem to want your intent function to FORCE the intent (calling the debug function?), but how much information is in the prompt of the assistant about what to DO, i.e. asking the user for the data (key, etc.)?

Much to go over after more investigation.

First, we’re talking about workarounds and cajoling the model into understanding intent that is clearly defined in the JSON schema. minProperties and required are not suggestions. Just as we shouldn’t have to instruct the model to make a string property actually be a string, we shouldn’t have to instruct the model to require a required property. Hopefully they fix this in the near future.

Remember that longer instruction sets cost more tokens, which means more money and time; using the instructions to solve this problem should be a last resort.

Second, this raises the question of whether the Assistant should be allowed to validate input at all, or if it should all be done manually against the arguments in client.beta.threads.runs.retrieve(thread_id=thread_id, run_id=run.id).required_action.submit_tool_outputs.tool_calls. Taking validation out of the hands of the assistant offers the greatest control, but at the cost of developing a secondary input system. I think that in most cases where an Assistant is calling a function, it should be the sole interface for that function.
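A rough sketch of that manual path (again assuming the Python SDK; the required-key check is just an example):

import json
from openai import OpenAI

client = OpenAI()

def pending_invalid_calls(thread_id, run_id):
    """Retrieve a run that is waiting on tool outputs and collect the ids of
    any tool calls whose arguments are missing the required object."""
    run = client.beta.threads.runs.retrieve(thread_id=thread_id, run_id=run_id)
    invalid = []
    for tool_call in run.required_action.submit_tool_outputs.tool_calls:
        args = json.loads(tool_call.function.arguments or "{}")
        if "MyCustomObject" not in args:
            invalid.append(tool_call.id)
    return invalid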

Third, I suspect this issue might be caused by the Assistant modifying the prompt. I spent the night trying many variations of the instructions and the function's schema format. There were some combinations in which gpt-4-0125-preview correctly prompted the user for input while gpt-3.5-turbo-0125 failed. So maybe the entire issue here is that the Assistant is changing "run the debug function" into "run the debug function and make stuff up if you have to". How can I find out if it's doing that?

Finally, I found that adding something like: “Always prompt the user to input the required parameters for any function that needs them before proceeding. Never make up values for required parameters.” to the instructions will make it work, but what’s interesting is that if I then delete that line from the instructions, it still works afterwards. Why could that be?