Missing a message during function call - OpenAI Assistants API

Hi,

I am using the Assistants API. During function calls, I am at times losing a response. It doesn't happen every time, but occasionally, when the function call tool is triggered, a message seems to get lost.
Anyone seen similar behaviour?
Jai

1 Like

The entire server infrastructure is under heavy load at the moment; I expect that is what is causing some timeouts and failed responses.

I will raise this issue with OpenAI, but at the moment it’s down to a huge uptick in server usage.

2 Likes

It appears that if the function call schema includes a "response" parameter, the model may occasionally skip creating a message and instead place the response text into the tool call arguments.
From my logs, I can see the run.required_action output just before the tool outputs are submitted. I noticed that the object looked like this:

{
  "type": "submit_tool_outputs",
  "submit_tool_outputs": {
    "tool_calls": [
      {
        "id": "call_VOv0gAz4mEB2bpb7kz8ajuBu",
        "type": "function",
        "function": {
          "name": "process_query",
          "arguments": "{\"user_query\":\"how to remove annotations?\",\"response\":\"Annotations can be removed...\"}"
        }
      }
    ]
  }
}
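
For reference, this is roughly how that object can be captured just before the tool outputs are submitted. This is only a sketch: it assumes the official openai Python SDK (v1.x) with the Assistants v2 beta, and reuses the IDs from my logs as placeholders.

# Sketch: log run.required_action before submitting tool outputs, so a
# "response" smuggled into the call arguments shows up in the logs.
# Assumes the openai Python SDK v1.x and the Assistants v2 beta.
import json
from openai import OpenAI

client = OpenAI()

run = client.beta.threads.runs.retrieve(
    run_id="run_x88nVDt5kmyrGIsyqILqxOdA",       # placeholder: your run ID
    thread_id="thread_95FaAdR6qYwb1tDDLEdDoifY",  # placeholder: your thread ID
)

if run.status == "requires_action" and run.required_action is not None:
    for call in run.required_action.submit_tool_outputs.tool_calls:
        print(call.id, call.function.name)
        # function.arguments is a JSON string; pretty-print it for the log
        print(json.dumps(json.loads(call.function.arguments), indent=2))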

It seems the model may have placed the “response” content directly into the function call rather than as a standalone message. My function schema is as follows:

{
  "name": "process_query",
  "description": "Function which takes in the user query and the response given to the query. The function returns success always.",
  "strict": true,
  "parameters": {
    "type": "object",
    "required": ["user_query", "response"],
    "properties": {
      "user_query": {
        "type": "string",
        "description": "The query input provided by the user"
      },
      "response": {
        "type": "string",
        "description": "The response returned for the user query"
      }
    },
    "additionalProperties": false
  }
}
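
For completeness, the output side is trivial, since the function always returns success. A sketch of the submission follows (same SDK assumptions as above; the helper name is mine, not part of the API):

# Sketch: submit the constant acknowledgement for a process_query call.
# Assumes the openai Python SDK v1.x and Assistants v2 beta;
# submit_process_query_result is a hypothetical helper name.
from openai import OpenAI

client = OpenAI()

def submit_process_query_result(thread_id: str, run_id: str, tool_call_id: str):
    # The function contract says it always succeeds, so the output is constant.
    return client.beta.threads.runs.submit_tool_outputs(
        run_id=run_id,
        thread_id=thread_id,
        tool_outputs=[{"tool_call_id": tool_call_id, "output": "success"}],
    )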

Because the schema contains a response parameter, I suspect the model may be “confused” and occasionally skips the message-creation step, placing the answer directly into the tool call arguments instead.

Question:
Is it possible that including a response parameter in the function schema occasionally causes the model to bypass creating a separate message, resulting in the observed issue? Any insights or similar experiences would be greatly appreciated. What surprises me is that the thread in the OpenAI console displays the message, yet there is no way to retrieve it using the runId.
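
To illustrate that last point, this is roughly how I try to pull the message back by run, and where it comes up empty. A sketch, assuming the openai Python SDK v1.x; run_id filtering on messages is an Assistants v2 parameter:

# Sketch: list the messages generated by a specific run.
# When the issue occurs, this returns an empty list even though the
# console shows the message in the thread.
from openai import OpenAI

client = OpenAI()

messages = client.beta.threads.messages.list(
    thread_id="thread_95FaAdR6qYwb1tDDLEdDoifY",
    run_id="run_x88nVDt5kmyrGIsyqILqxOdA",
)

for message in messages.data:
    for part in message.content:
        if part.type == "text":
            print(message.id, part.text.value)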

1 Like

Thanks, that would be helpful.

1 Like

If you can provide detailed logs of this response-in-the-tool-call behaviour and a method to reproduce it (however infrequently), I will again pass this on so it can be looked into.

I have also seen a very small number of out-of-spec responses to non-tool-call prompts in tool-calling-enabled API calls.

2 Likes

Hi, thank you.
Reproducing this is a little tricky. It happens occasionally; I have seen it about three times in the last month.

The threadId from when this happened is: thread_95FaAdR6qYwb1tDDLEdDoifY
Using this, they should be able to get the run IDs and the message-creation steps. If anything more is needed, I would be more than glad to provide it.

But just for completeness, I investigated the threadId, checking the runs and the run steps. The problem run's steps are below:

{
  "options": {
    "method": "get",
    "path": "/threads/thread_95FaAdR6qYwb1tDDLEdDoifY/runs/run_x88nVDt5kmyrGIsyqILqxOdA/steps",
    "query": {
      
    },
    "headers": {
      "OpenAI-Beta": "assistants=v2"
    }
  },
  "response": {
    "size": 0,
    "timeout": 0
  },
  "body": {
    "object": "list",
    "data": [
      {
        "id": "step_EznCatdtX3SJ4cMFr8aVhY2n",
        "object": "thread.run.step",
        "created_at": 1733891110,
        "run_id": "run_x88nVDt5kmyrGIsyqILqxOdA",
        "assistant_id": "asst_lvhY2SuUNCFIVgiyEhzVqZBp",
        "thread_id": "thread_95FaAdR6qYwb1tDDLEdDoifY",
        "type": "message_creation",
        "status": "completed",
        "cancelled_at": null,
        "completed_at": 1733891110,
        "expires_at": null,
        "failed_at": null,
        "last_error": null,
        "step_details": {
          "type": "message_creation",
          "message_creation": {
            "message_id": "msg_xSnOt354UTKSUBor03u4PkNF"
          }
        },
        "usage": {
          "prompt_tokens": 1487,
          "completion_tokens": 11,
          "total_tokens": 1498,
          "prompt_token_details": {
            "cached_tokens": 0
          }
        }
      }
    ],
    "first_id": "step_EznCatdtX3SJ4cMFr8aVhY2n",
    "last_id": "step_EznCatdtX3SJ4cMFr8aVhY2n",
    "has_more": false
  }
}
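
For reference, the steps above were fetched roughly like this (a sketch, assuming the openai Python SDK v1.x and the Assistants v2 beta):

# Sketch: walk every run on the thread and list each run's steps,
# to spot runs where the message_creation step is missing or odd.
from openai import OpenAI

client = OpenAI()
thread_id = "thread_95FaAdR6qYwb1tDDLEdDoifY"

for run in client.beta.threads.runs.list(thread_id=thread_id):
    steps = client.beta.threads.runs.steps.list(run_id=run.id, thread_id=thread_id)
    for step in steps.data:
        print(run.id, step.id, step.type, step.status)
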
1 Like

Thanks, I’ll make a note of this thread and pass it on, cheers.

3 Likes

I don’t believe it has to do with the response property within the schema.

It appears that this issue could be rectified by passing a more suitable, clearer function definition.

Even to me, this function looks as if it is meant to be called every time, because the definition gives the model no specific condition or use case for when to call it.
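
For example, a definition that spells out an explicit trigger condition might look like the sketch below. This is purely illustrative wording layered on the schema posted above, expressed as a Python tool definition; it is not a claim that this exact description fixes the issue.

# Illustrative sketch: same parameters as the original schema, but the
# description states when the tool should (and should not) be called.
# The wording here is hypothetical.
process_query_tool = {
    "type": "function",
    "function": {
        "name": "process_query",
        "description": (
            "Call this exactly once, after you have fully answered the user, "
            "to log the user's query and the final answer you already gave. "
            "Never use it to produce the answer itself."
        ),
        "strict": True,
        "parameters": {
            "type": "object",
            "required": ["user_query", "response"],
            "properties": {
                "user_query": {
                    "type": "string",
                    "description": "The query input provided by the user",
                },
                "response": {
                    "type": "string",
                    "description": "The response already given for the user query",
                },
            },
            "additionalProperties": False,
        },
    },
}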

2 Likes

This is impractical and does not work with the tool call flow of an Assistant.

An AI will call a function tool when it is useful for satisfying a user request.

User: check seattle weather
Assistant: to weather tool…

User: what’s nvidia trading at?
Assistant: to stock ticker tool…

User: bye!
Assistant: to disconnect user tool…

The AI model is not going to call a tool after it has already written a satisfying user response. Making it do so would take a great deal of prompting against the very nature of how tools are intended to work, trying to combine user-facing output and a tool call in the same generation.

Much better would be to take the assistant's reply as the response you display to the user, and then programmatically take whatever action you want based on it. No additional AI expense either.
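
A sketch of that flow, assuming a recent openai Python SDK v1.x (with the create_and_poll helper); log_interaction is a hypothetical stand-in for whatever you do server-side:

# Sketch: let the assistant answer normally (no process_query tool) and
# log the query/response pair yourself afterwards.
from openai import OpenAI

client = OpenAI()

def log_interaction(user_query: str, response: str) -> None:
    print("LOG:", user_query, "->", response)  # replace with your own storage

def ask(thread_id: str, assistant_id: str, user_query: str) -> str:
    client.beta.threads.messages.create(
        thread_id=thread_id, role="user", content=user_query
    )
    run = client.beta.threads.runs.create_and_poll(
        thread_id=thread_id, assistant_id=assistant_id
    )
    # Messages come back newest-first by default; take this run's reply.
    reply = client.beta.threads.messages.list(
        thread_id=thread_id, run_id=run.id
    ).data[0]
    text = "".join(p.text.value for p in reply.content if p.type == "text")
    log_interaction(user_query, text)
    return text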

3 Likes

Thank you for your response. I understand the point about aligning with the intended tool flow of the Assistant.

However, my concern is slightly different. While the function design may indeed need improvement, the occasional loss of messages during function calls seems to point to an underlying issue that is not directly related to the practicality of the function itself. Even with an imperfect function definition, the system should ideally handle these scenarios without dropping messages.

I wonder if others have experienced similar behavior or have insights into why this might happen. Could it potentially be related to how the model processes asynchronous calls or handles state during tool invocation?