How to create an assistant message after a function call in the new tools API?

Hello!

We are trying to update our code to use the new tools API instead of the functions API (which is marked deprecated in the API reference: https://platform.openai.com/docs/api-reference/chat/create#chat-create-functions).

AFAIU, when using the new tools API, one is supposed to add the output of a function call as a tool message (e.g. the temperature for a weather lookup) and then call the API again to let GPT formulate a human-friendly answer.
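For reference, here is a minimal sketch of that round trip with the Python SDK. The model name, function schema, and weather result are placeholders, and it assumes the model actually chooses to call the tool:

from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What is the weather in Berlin?"}]

# First call: the model decides to call get_weather
response = client.chat.completions.create(
    model="gpt-4-1106-preview", messages=messages, tools=tools
)
assistant_message = response.choices[0].message
tool_call = assistant_message.tool_calls[0]

# Append the assistant tool call, then answer it with a tool message
messages.append(assistant_message)
messages.append({
    "role": "tool",
    "tool_call_id": tool_call.id,
    "content": "22 C, sunny",  # result of our own weather lookup
})

# Second round trip: GPT formulates the human-friendly answer
response = client.chat.completions.create(
    model="gpt-4-1106-preview", messages=messages
)
print(response.choices[0].message.content)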

In our use case, for the majority of function calls we just reply with a fixed utterance (mainly to avoid a second round trip). Until now we just added an assistant message after a function call. But this doesn't seem to be allowed anymore:

{
  "error": {
    "message": "An assistant message with 'tool_calls' must be followed by tool messages responding to each 'tool_call_id'. The following tool_call_ids did not have response messages: call_xxxxxxxxxxxxx",
    "type": "invalid_request_error",
    "param": "messages.[3].role",
    "code": null
  }
}

We have two questions:

  1. For how long will the old API (using functions instead of tools) be supported?
  2. What is the solution in our particular use case?

The tools API requires a matching ID between the tool call and the tool return. The assistant message carrying the tool call and the tool reply must be paired as follow-up messages.

This helps the AI endpoint match two parallel calls to the same function name with their respective returns.
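For example, with two parallel calls to the same function name, each tool_call_id must be answered by its own tool message directly after the assistant message (ids and values here are illustrative):

messages += [
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [
            {"id": "call_abc", "type": "function",
             "function": {"name": "get_weather", "arguments": '{"city": "Berlin"}'}},
            {"id": "call_def", "type": "function",
             "function": {"name": "get_weather", "arguments": '{"city": "Paris"}'}},
        ],
    },
    # One tool message per tool_call_id, directly after the assistant message
    {"role": "tool", "tool_call_id": "call_abc", "content": "22 C, sunny"},
    {"role": "tool", "tool_call_id": "call_def", "content": "18 C, rain"},
]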

Can you talk more about the use case here? I am not sure I fully understand. Are you saying the issue is that you used to be able to continue the conversation while you waited to send the tool response back?

Here is the excerpt of most importance:

The desired usage pattern is not a verbatim addition of the emitted AI language accompanied by the exact tool call result, followed by a second call to produce a response to the user.

The function might have been a triggering action that alone was the desired result (imagine a function "display a magic 8-ball in the GUI").

The programmer wished to continue with just a minor notation that the function was successful, along with a new user input.

This may also have application in data retrieval. A GUI can report "knowledge about the OpenAI API has been loaded", or "temperature set to creative".

With the tools API's stricter enforcement, the function role alone can no longer be placed. One would only be allowed a descriptive assistant role message, and that's not even an option with the "assistants" agent framework.

I'd also love this feature. In some cases I want the model to "return" a static response such as "Completed, here is your result: [2,000 tokens]", instead of asking the model to parse it and form its own response.

Our use case is that we don't need a tool message, because we have static answers for most of our function calls, or the call just takes some action in the real world. We don't want a second round trip to formulate the answer with GPT. But with the new API this doesn't seem to be allowed anymore.

Example

This was possible with the old API:

User: Play the movie XYZ.
Assistant-ToolCall: play_movie
Assistant: Alright, here is the movie XYZ. (static answer) ← this is not allowed anymore
[Play movie XYZ]
[90 minutes later …]
User: That was a nice movie. Who was the director?
Assistant: The director of XYZ was …
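In message form, the history above looks roughly like this (the tool_call id is illustrative); the new API rejects it because no tool message answers call_123:

messages = [
    {"role": "user", "content": "Play the movie XYZ."},
    {"role": "assistant", "content": None, "tool_calls": [
        {"id": "call_123", "type": "function",
         "function": {"name": "play_movie", "arguments": '{"title": "XYZ"}'}},
    ]},
    # Old functions API: a static assistant answer could follow directly.
    # New tools API: rejected, because no tool message answers call_123.
    {"role": "assistant", "content": "Alright, here is the movie XYZ."},
]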

Not sure if you have found a way around this. We are injecting static responses into the front end where necessary, and adding ghost entries as tool role responses, followed by a fake assistant message, to keep conversation context and get past the tool_call_id handshake.
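In case it is useful, a sketch of the ghost entries we append after the assistant tool call (the stub content is arbitrary and never shown to the user):

messages += [
    # Ghost entry: satisfies the tool_call_id handshake without a real result
    {"role": "tool", "tool_call_id": tool_call.id, "content": "success"},
    # Fake assistant message carrying the static utterance, so later turns
    # ("Who was the director?") keep the conversational context
    {"role": "assistant", "content": "Alright, here is the movie XYZ."},
]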