Did OpenAI just break function calling?

Does anyone else have issues with function calling? Our app is currently experiencing problems because the results from the API call now contain only strings.

We give this example in the prompt, which used to return the JSON correctly:

But since yesterday, the API call has returned ‘points’ as a string, which breaks our app.

I’ve tried a number of different prompts, including something like the one below:

But nothing returns a correct JSON response. I also checked the function that we call: ‘points’ is defined as an integer enum, and we haven’t changed anything about the API call in a while, but something still causes it to break.
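For context, our function definition looks roughly like this (a simplified sketch; the real schema lives in our prompt engineering tool, and the enum values here are illustrative):

```python
# Simplified sketch of our "functions" definition -- only the "create_deck"
# name and the integer enum on "points" reflect our actual setup; the
# description and the specific enum values are illustrative.
create_deck_function = {
    "name": "create_deck",
    "description": "Create a deck and assign it a point value.",
    "parameters": {
        "type": "object",
        "properties": {
            "points": {
                "type": "integer",
                # 'points' is restricted with an integer enum:
                "enum": [1, 2, 3, 5, 8],
            },
        },
        "required": ["points"],
    },
}
```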

Help is appreciated!

It seems like you aren’t actually using the “function-call” feature, but are instead crafting your own output through prompting.

If you are experiencing massive cognitive decline and an AI that won’t follow system instructions — welcome to the future.

You can switch to the prior checkpoint models, either gpt-3.5-turbo-0301 or gpt-4-0314, and see how they perform on the task, but at least the former was also hit with similar degradation in performance despite the promise that it would “remain” available through June 2024.


Thanks for the help! But we actually do use the function calling feature. Here is a screenshot of our prompt engineering tool, and from the API calls we can see that we actually do call the “create_deck” function.

I found the issue!

So apparently OpenAI doesn’t want you to use enums in your function when the parameter is an integer/number.

Let’s see if you can find the differences from the screenshot below (should be fairly easy given the highlighting):

None of the changes to my prompt resulted in correct JSON until I changed the ‘functions’ parameter and removed the enum from ‘points’. After that, every JSON was correct and gave me actual integers for ‘points’ instead of strings.
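Roughly, the working version looks like this (again a simplified sketch, with illustrative values; the allowed range is now stated in the description instead of an enum):

```python
# Fixed, simplified sketch: same integer parameter, but the allowed values
# are described in prose instead of an integer enum. With this change,
# 'points' comes back as a real integer in the function-call arguments.
create_deck_function = {
    "name": "create_deck",
    "description": "Create a deck and assign it a point value.",
    "parameters": {
        "type": "object",
        "properties": {
            "points": {
                "type": "integer",
                "description": "Point value for the deck, e.g. 1, 2, 3, 5, or 8.",
            },
        },
        "required": ["points"],
    },
}
```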

I’ve found that prompting about or discussing the function feature in your messages will actually degrade the quality of the output and how reliably the function is called.

The function name, each parameter’s name and description, and the function’s own description should make crystal-clear what the function’s purpose is and what data is expected when it is invoked.

You can dump the whole function JSON here and I can see if there is something that makes it worse, like nested descriptions the AI would never receive.

But, yes, they probably broke GPT-4 if it worked before and now doesn’t. It’s the same AI that can’t properly use its DALL-E function in ChatGPT. There’s no other function-calling model to use unless you sign up for Azure and hope that these continued hits to AI intelligence are delayed there.


Yes, enums are not rejected by the API, but they are not passed through to the AI for anything but string types.
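If you want to catch this ahead of time, here is a small helper of my own (not part of any OpenAI library) that walks a function schema and flags enum constraints on non-string properties:

```python
def find_nonstring_enums(schema, path="parameters"):
    """Recursively collect (path, type) for every enum that sits on a
    non-string property -- the kind that, per this thread, the model
    does not reliably honor."""
    found = []
    if isinstance(schema, dict):
        if "enum" in schema and schema.get("type") not in (None, "string"):
            found.append((path, schema["type"]))
        for key, value in schema.items():
            found += find_nonstring_enums(value, f"{path}.{key}")
    elif isinstance(schema, list):
        for i, item in enumerate(schema):
            found += find_nonstring_enums(item, f"{path}[{i}]")
    return found

# Example: a string enum passes, an integer enum is flagged.
sample = {
    "type": "object",
    "properties": {
        "suit": {"type": "string", "enum": ["hearts", "spades"]},
        "points": {"type": "integer", "enum": [1, 2, 3]},
    },
}
print(find_nonstring_enums(sample))
# → [('parameters.properties.points', 'integer')]
```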

Here I go deep into the json datatypes that work:


Yeah, I think it’s super weird that the exact same API call suddenly results in errors/bad JSONs. But thanks for the explanation! Strange that they let you pass enums for fields that are not strings and that it still worked before. It only broke yesterday, and I didn’t see any big updates or releases!