Issue with AI Embedding Function Calls in Text Responses

I’m building an AI assistant. The AI has a pretty hefty prompt and mostly performs well, but occasionally, instead of returning a structured function call in the shape the documentation describes, it simply writes the function call into the body of its text reply. Here’s an example:


Let’s start with your past treatments. Here’s what we have recorded so far:

[Call functions.getTreatments with parameters: { "accident_id": —, "treatment_status": "past" }]

Could you please let me know if there are any other treatments you’ve had since the accident that aren’t listed here? Here are some possible treatments you might have had:

[Call functions.getTreatmentTypes]

Remember, if you’ve had…


Is there any trick to getting the AI to consistently call the function as described in the docs (https://platform.openai.com/docs/api-reference/chat/create), rather than writing the calls inline with the reply? Or is this expected behavior?

I am using the following parameters in my API call, along with the function definitions and chat messages:

"model": "gpt-4-0613",
"top_p": 0.1,
"seed": 20240,
"stream": false,
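One thing worth noting for context: the 0613 chat models also accept a `function_call` parameter alongside `functions`, which can either let the model decide (`"auto"`) or force a specific function. A minimal sketch of a request payload using it (the function name and trimmed schema below are just illustrative placeholders from my app, not a complete definition):

```python
# Sketch of a Chat Completions request payload using the legacy
# "functions" / "function_call" parameters of the 0613 models.
payload = {
    "model": "gpt-4-0613",
    "top_p": 0.1,
    "seed": 20240,
    "stream": False,
    "messages": [
        {"role": "user", "content": "List my past treatments."},
    ],
    "functions": [
        {
            # Illustrative function; real schema trimmed for brevity
            "name": "getTreatments",
            "description": "Look up treatments for an accident.",
            "parameters": {
                "type": "object",
                "properties": {
                    "accident_id": {"type": "integer"},
                    "treatment_status": {"type": "string"},
                },
            },
        }
    ],
    # "auto" lets the model choose; {"name": ...} forces this exact function
    "function_call": {"name": "getTreatments"},
}
```

Forcing the call only helps when you already know which function should run at that turn, so it doesn't fully solve the general case.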

I’ve also tried the most recent preview model, and it seems to perform worse in my tests.

I couldn’t find anything in the docs that describes this kind of response. I can probably catch and parse these inline calls myself, but then the AI won’t receive the function results through the normal flow, so it can’t interpret them for the user in its reply.
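For what it's worth, here is a rough sketch of the fallback parsing I had in mind, assuming the inline calls always follow the bracketed `[Call functions.name …]` pattern shown above (the pattern and helper name are my own, not anything from the API):

```python
import json
import re

# Matches inline pseudo-calls such as:
#   [Call functions.getTreatmentTypes]
#   [Call functions.getTreatments with parameters: { "accident_id": 1 }]
CALL_RE = re.compile(
    r"\[Call functions\.(\w+)(?:\s+with parameters:\s*(\{.*?\}))?\]",
    re.DOTALL,
)

def extract_inline_calls(text):
    """Return a list of (function_name, args_dict) found in the reply text."""
    calls = []
    for name, raw_args in CALL_RE.findall(text):
        args = json.loads(raw_args) if raw_args else {}
        calls.append((name, args))
    return calls
```

This would only recover calls whose arguments happen to be valid JSON, and as noted above, the results still wouldn't flow back to the model as proper function responses.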

Any help or suggestions would be appreciated.

Hi Nate, I am experiencing exactly the same issue. Did you find any improvement or resolution in the meantime?