Incorrect response when AiAssistant calls function

On occasion my AI assistant function gets called with bad JSON arguments. It's coming through like this, with the arguments embedded inside another object:

Example of bad arguments

                  "arguments":"{\n  \"recipient_name\": \"functions.get_answer\",\n  \"parameters\": {\n    \"QuestionId\": 29537,\n    \"Reasoning\": \"The summary says xxxxx.\",\n    \"Answer\": [\"3 or less\"],\n    \"Question\": \"xxxxxxxxxxxx?\"\n  }\n}"

Normally it would come through like this:

"required_action": {
    "type": "submit_tool_outputs",
    "submit_tool_outputs": {
      "tool_calls": [
        {
          "id": "call_dGXJ3hlFeH5gc2mf0cti6nRh",
          "type": "function",
          "function": {
            "name": "get_answer",
            "arguments": "{\n  \"QuestionId\": 29537,\n  \"Reasoning\": \"xxxxxxxxxxxxx\",\n  \"Answer\": [\"3 or less\"],\n  \"Question\": \"xxxxxxxxx\"\n}"
          }
        }
      ]
    }
  }

My assistant function is like this:

  "name": "get_answer",
  "description": "Look at the data summary, try and answer the question",
  "parameters": {
    "type": "object",
    "properties": {
      "QuestionId": {
        "type": "integer",
        "description": "The Question ID, the numeric ID only."
      },
      "Reasoning": {
        "type": "string",
        "description": "Tell me your reasoning as to why you have selected the answer you did"
      },
      "Answer": {
        "type": "array",
        "items": {
          "type": "string"
        },
        "description": "List of answers, single or multiple as specified, applicable to the QuestionId being asked. Any date answers must be full dates in dd MMM yyyy; if a date is only partial, make it the 1st of that month; if there is no month and only a year, make the month January"
      },
      "Question": {
        "type": "string",
        "description": "The Question being asked; escape any double quotes or anything that might break the JSON format"
      }
    },
    "required": [

I saw on another post that you can return:

 {"status": "error", "message": "invalid parameters"}

So I'm doing that, but I'm concerned it could end up in a forever loop if it keeps sending the wrong JSON in the arguments.
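One defensive option is to tolerate the wrapper shape rather than always returning an error, and to cap how many times the error status is returned so a loop can't run forever. This is only a sketch; the `unwrap_arguments` helper and `MAX_BAD_ARGUMENT_RETRIES` name are my own, not part of the OpenAI SDK:

```python
import json

# Cap on how many times we return {"status": "error", ...} before
# cancelling the run, to avoid a forever loop (illustrative value).
MAX_BAD_ARGUMENT_RETRIES = 3

def unwrap_arguments(raw: str) -> dict:
    """Return the real parameter dict, tolerating the bad nested shape."""
    args = json.loads(raw)
    # Bad shape: {"recipient_name": "functions.get_answer", "parameters": {...}}
    if set(args) == {"recipient_name", "parameters"}:
        return args["parameters"]
    return args  # already the normal shape

bad = ('{\n  "recipient_name": "functions.get_answer",\n'
       '  "parameters": {\n    "QuestionId": 29537,\n'
       '    "Answer": ["3 or less"]\n  }\n}')
print(unwrap_arguments(bad)["QuestionId"])  # -> 29537
```

With that in place, the error tool output only needs to be sent when the arguments can't be salvaged at all, and only up to the retry cap.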

I see a number of flaws with the idea behind your function. Deviating from how functions are typically used will also deviate from the tuning on how to emit functions.

A function is not for output. Especially in Assistants, having the Assistant write an output to be read by a user as a function instead of a direct response basically leaves a thread useless: what is it supposed to do with a return value from the function, write more tools as output?

A function should augment the AI with an external action it can perform on the real world, or an external operation it can execute to receive back information.

The main description should include the purpose of the function, what powers the function after something is sent, and what the expected return value is. Additionally, you can include circumstances where invocation is helpful to fulfill particular user inputs.

I do not see that.

Secondly, let’s talk about the order of the text that would actually be produced by the AI as it progresses through generating language.

You have:

  1. question ID,
  2. reason why the given answer was selected
  3. an array of answer strings
  4. the question

After the ID, the question would be best produced next, so the AI focuses its attention without having to look far back into context.

Having the AI discuss reasoning, perhaps discussing answer candidates and their merits, is a strong way to get correct answers. However, you ask the AI to reason about an answer it has not yet produced.

I have a feeling that if you improve the orientation of the description and parameter descriptions, the AI will have more clarity and make fewer errors in ultimately structuring its output, or in this case, not repeating function names within the arguments.
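A sketch of that reordering, using the same four properties from the schema above (bodies elided, since only the emission order changes):

```json
"properties": {
  "QuestionId": { ... },
  "Question":   { ... },
  "Reasoning":  { ... },
  "Answer":     { ... }
}
```

The AI writes JSON top to bottom, so this puts the question in front of the reasoning, and the reasoning in front of the answer it justifies.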

Reducing top_p to be under the default value of 1.0 can reduce “wrong” tokens being written, only producing the most likely ones when set low.

Then, finally, the AI is ultimately confused by OpenAI’s poor implementation of parallel tool calls, where it has to repeat functions into a tool that is a wrapper. There is a run parameter now that allows this to be turned off. "parallel_tool_calls": false

I hope this analysis helps.


Thanks very much for that. I'm quite new at this, so it's good to get a different perspective.

I had started with doing JSON returns by specifying the JSON in the prompt, which was only about 80% reliable.

I figured functions could be used to structure it better. It's not really a chat-type response I'm after; it's more about looking in a document, asking questions of that document, and having it send me back the answers.

I'll keep playing.