New API changes on Chat Completion broke some previously working stuff

Dear OpenAI Community,

I’m writing to share and seek advice on a specific issue we’ve encountered with the recent changes to the Chat Completion API, which have significantly affected our application’s functionality.

Background:

Our application heavily relies on the Chat Completion API, particularly a feature where certain function calls would initially return a status message like ‘in progress’. This was integral to our user experience, as it informed users that their request was being processed. The function would later update with the final result, allowing for a seamless and interactive user experience.

Issue:

The recent API updates have introduced a specific requirement that a tool call response must directly follow the message where it was initiated. While this is manageable, it disrupts our previous method of providing an early ‘in progress’ message, followed by the actual tool response a few messages later. This approach, which involved issuing a tool message with the final result after non-tool call messages, is no longer viable under the current API constraints.

Invalid parameter: messages with role 'tool' must be a response to a preceding message with 'tool_calls'.

This change means we cannot notify users that their request is in progress until after we have made a tool call, which undermines the interactive nature of our application.
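
To make the constraint concrete, here is roughly what the API now expects versus what we used to send (the function name, ids, and values below are made up for illustration):

# Required now: the 'tool' reply must directly follow the assistant
# message that carries the matching tool_calls entry.
messages_new = [
    {"role": "user", "content": "Check the status of my order."},
    {"role": "assistant", "tool_calls": [{
        "id": "call_abc123",
        "type": "function",
        "function": {"name": "fetch_order_status", "arguments": '{"order_id": 1}'},
    }]},
    {"role": "tool", "tool_call_id": "call_abc123", "content": "shipped"},
]

# Our previous flow, now rejected: an interim status message sat between
# the tool call and its tool reply.
messages_old = [
    {"role": "user", "content": "Check the status of my order."},
    {"role": "assistant", "tool_calls": [{
        "id": "call_abc123",
        "type": "function",
        "function": {"name": "fetch_order_status", "arguments": '{"order_id": 1}'},
    }]},
    {"role": "assistant", "content": "Your request is in progress..."},
    {"role": "tool", "tool_call_id": "call_abc123", "content": "shipped"},  # now raises the error above
]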

Impact:

The inability to provide immediate status updates has led to a less responsive user experience. Our users were accustomed to receiving instant feedback, and this change has made our application feel slower and less interactive.

Request for Advice:

We are seeking advice or suggestions on how to adapt to these changes. Specifically:

  1. Is there a recommended approach to provide immediate feedback under the new system?
  2. Are there alternative methods to implement a similar functionality that complies with the new API rules?
  3. Any insights into whether this change is permanent would also be greatly appreciated.

We hope to find a solution that allows us to maintain the level of interactivity and responsiveness our users have come to expect. Any guidance from the OpenAI team or community members who have faced similar challenges would be invaluable.

Thank you for your attention and assistance.


Hey @aurimasniekis! Have you found a solution to this issue yet?


I resolved it by just adding

{
    tool_calls: toolCalls,
    role: "assistant",
},

before the message with role="tool".
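
For anyone else hitting this, the full round trip then looks roughly like this (shown in Python; the model name, the example tool, and the hard-coded result are placeholders, and I'm assuming a single tool call):

from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_order_status",   # placeholder function
        "description": "Look up the status of an order by id.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "integer"}},
            "required": ["order_id"],
        },
    },
}]

messages = [{"role": "user", "content": "What is the status of order 42?"}]

# First request: the model decides to call the tool.
response = client.chat.completions.create(
    model="gpt-4-1106-preview",       # placeholder model name
    messages=messages,
    tools=tools,
)
assistant_message = response.choices[0].message

if assistant_message.tool_calls:
    # 1. Echo the assistant turn that carries the tool_calls back into the history.
    messages.append({
        "role": "assistant",
        "tool_calls": [tc.model_dump() for tc in assistant_message.tool_calls],
    })
    # 2. The 'tool' result must come directly after that turn.
    messages.append({
        "role": "tool",
        "tool_call_id": assistant_message.tool_calls[0].id,
        "content": "shipped",         # stand-in for the real function output
    })
    # Second request: the model answers the user with the tool output in context.
    followup = client.chat.completions.create(
        model="gpt-4-1106-preview",
        messages=messages,
    )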


@e.cubillas23 this solves it, but I’m baffled by the flow:

First I’m pushing the tool calls like you suggested (multiple tool calls),
but afterward I need to push a single tool’s response?

messages.push({
  role: 'tool',
  tool_call_id: tool.id,
  content: responseBack,
});

Either I need to add {assistant: tool call} and then {tool: response} one pair at a time for each tool call, OR I need to be able to add all the tool calls and then all the responses.
What am I missing here?
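
Concretely, these are the two shapes I'm comparing (the ids, function name, and values below are invented):

call_1 = {"id": "call_1", "type": "function",
          "function": {"name": "get_weather", "arguments": '{"city": "Paris"}'}}
call_2 = {"id": "call_2", "type": "function",
          "function": {"name": "get_weather", "arguments": '{"city": "Rome"}'}}

# Option A: one assistant turn carrying all tool_calls, then one 'tool'
# message per call id, all in a row.
option_a = [
    {"role": "assistant", "tool_calls": [call_1, call_2]},
    {"role": "tool", "tool_call_id": "call_1", "content": "sunny"},
    {"role": "tool", "tool_call_id": "call_2", "content": "rainy"},
]

# Option B: repeat the assistant/tool pair once per call.
option_b = [
    {"role": "assistant", "tool_calls": [call_1]},
    {"role": "tool", "tool_call_id": "call_1", "content": "sunny"},
    {"role": "assistant", "tool_calls": [call_2]},
    {"role": "tool", "tool_call_id": "call_2", "content": "rainy"},
]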


My tool calls end up looking like this:

[{'role': 'user', 'content': 'how many bathrooms are in listing 1? '},
 {'role': 'assistant', 'content': [ChatCompletionMessageToolCall(id='call_bPuBwOgwyh2MpImzglS9JK4a', function=Function(arguments='{"id":1}', name='fetch_listing_data'), type='function')]},
 {'role': 'assistant', 'content': 'I cannot find the answer. Let me Look it up for you.'},
 {'tool_call_id': 'call_bPuBwOgwyh2MpImzglS9JK4a', 'role': 'tool', 'name': 'fetch_listing_data', 'content': '1'}]

Which ends up giving this obvious error: 

Invalid value: 'function'. Supported values are: 'text' and 'image_url'


What is the format of tool_calls or the string you ended up passing?

The tool_calls object should look like this:

{'role': 'assistant', 'tool_calls': [{'id': 'tool_call_id', 'function': {'arguments': '{"arg1": "arg_Value"}', 'name': 'function_name'}, 'type': 'function'}]}
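
Applied to the listing example above, the history would look roughly like this (I've moved the interim assistant note so it no longer sits between the tool_calls turn and the tool reply, since a 'tool' message has to directly follow the message that carries its tool_calls):

messages = [
    {'role': 'user', 'content': 'how many bathrooms are in listing 1? '},
    # the interim note now comes before the tool_calls turn instead of after it
    {'role': 'assistant', 'content': 'I cannot find the answer. Let me Look it up for you.'},
    {'role': 'assistant', 'tool_calls': [{
        'id': 'call_bPuBwOgwyh2MpImzglS9JK4a',
        'type': 'function',
        'function': {'name': 'fetch_listing_data', 'arguments': '{"id":1}'},
    }]},
    # the tool result directly follows the assistant turn that carries its tool_calls
    {'tool_call_id': 'call_bPuBwOgwyh2MpImzglS9JK4a', 'role': 'tool',
     'name': 'fetch_listing_data', 'content': '1'},
]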
