AFAIU, when using the new tools API, one is supposed to add the output of a function call as a tool message (e.g. the temperature for a weather lookup) and then call the API again to let GPT formulate a human-friendly answer.
In our use case, for the majority of function calls we just reply with a fixed utterance (mainly to avoid a second round trip). Until now we just added an Assistant message after a function call. But that doesn't seem to be allowed anymore:
{
"error": {
"message": "An assistant message with 'tool_calls' must be followed by tool messages responding to each 'tool_call_id'. The following tool_call_ids did not have response messages: call_xxxxxxxxxxxxx",
"type": "invalid_request_error",
"param": "messages.[3].role",
"code": null
}
}
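For context, the sequence the tools API now enforces looks roughly like this (a minimal sketch in Python; the function name, arguments, and contents are illustrative, not from the error above):

```python
# Sketch of the message sequence the tools API expects: every assistant
# message that contains tool_calls must be followed by one tool message
# per tool_call_id before any further assistant turn is allowed.

def build_messages(tool_call_id: str) -> list[dict]:
    return [
        {"role": "user", "content": "What's the weather in Berlin?"},
        {
            "role": "assistant",
            "content": None,
            "tool_calls": [{
                "id": tool_call_id,
                "type": "function",
                "function": {"name": "get_weather",
                             "arguments": '{"city": "Berlin"}'},
            }],
        },
        # The tool message closes the handshake; its tool_call_id must
        # match the id from the assistant's tool_calls entry above.
        {"role": "tool", "tool_call_id": tool_call_id,
         "content": '{"temperature_c": 21}'},
    ]

messages = build_messages("call_abc123")
```

Omitting the `role: "tool"` entry (or appending a plain assistant message in its place) is what triggers the `invalid_request_error` quoted above.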
We have two questions:
For how long will the old API (using functions instead of tools) be supported?
Can you talk more about the use case here? I am not sure I fully understand. Are you saying the issue is that you used to be able to continue the conversation while you waited to send the tool response back?
The desired use pattern is not to append the tool call result verbatim alongside the emitted AI language and then have the model produce a response to the user.
The function might have been a triggering action that alone was the desired result (imagine a function "display a magic 8-ball in the GUI").
The programmer wished to continue with just a brief note that the function succeeded, along with the next user input.
This may also have application in data retrieval. A GUI can report "knowledge about the OpenAI API has been loaded", or "temperature set to creative".
With the stricter enforcement of tools, the function role alone can no longer be placed. One would only be allowed a descriptive assistant role message, and that's not even an option with the "assistants" agent framework.
I'd also love this feature. In some cases I want the model to "return" a static response such as "Completed, here is your result: [2,000 tokens]", instead of asking the model to parse and form its own response.
Our use case is that we don't need a tool message, because most of our function calls either have static answers or take some action in the real world. We don't want a second round trip to have GPT formulate the answer. But with the new API this doesn't seem to be allowed anymore.
Example
This was possible with the old API:
User: Play the Movie XYZ.
Assistant-ToolCall: play_movie
Assistant: Alright, here is the movie XYZ. (static answer) ← this is not allowed anymore
[Play movie XYZ]
[90 minutes later …]
User: This was a nice movie. Who was the director?
Assistant: The director of XYZ was …
Not sure if you have found a way around this - we are injecting static responses into the front end where necessary, and adding ghost entries as tool role responses, followed by a fake assistant message, to keep conversation context and get past the tool_call_id handshake.
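That ghost-entry workaround can be sketched like this (a hedged example in Python; `STATIC_REPLIES`, `append_static_turn`, and the stub `"ok"` content are illustrative names, not part of the OpenAI API):

```python
# Sketch of the "ghost entry" workaround: after the model emits a tool
# call, append a stub tool message for each tool_call_id so the
# handshake is satisfied, then append the static assistant reply
# ourselves instead of making a second round trip to the model.

STATIC_REPLIES = {  # hypothetical mapping: function name -> canned answer
    "play_movie": "Alright, here is the movie.",
}

def append_static_turn(messages: list[dict], assistant_msg: dict) -> list[dict]:
    messages = messages + [assistant_msg]
    for call in assistant_msg.get("tool_calls", []):
        # Stub tool response exists only to satisfy the tool_call_id check.
        messages.append({"role": "tool",
                         "tool_call_id": call["id"],
                         "content": "ok"})
        # Static assistant message keeps the transcript coherent for
        # later turns (e.g. "Who was the director?").
        messages.append({"role": "assistant",
                         "content": STATIC_REPLIES[call["function"]["name"]]})
    return messages

tool_call = {"role": "assistant", "content": None,
             "tool_calls": [{"id": "call_1", "type": "function",
                             "function": {"name": "play_movie",
                                          "arguments": "{}"}}]}
history = append_static_turn(
    [{"role": "user", "content": "Play the movie XYZ."}], tool_call)
```

The resulting `history` list is what would be sent on the next API call; the stub tool message is never shown to the user, only the injected static reply.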