Example of human-in-the-loop with multiple tool_calls?

Does anybody have an example of the Chat Completions API with both human in the loop and tools?

I’m specifically looking for an example when the model returns multiple tool_calls, and user rejects one / accepts another. I’m not sure how to structure the message history.

Thanks!!


I can’t think of many cases for this. Maybe if a tool call was emitted for both Google and Bing, the results were displayed to the user, and the best one returned. You’d likely not be presenting tool query language to the user.

The most straightforward way I can see is to return an assistant-role message containing only the selected tool call, followed by the tool return keyed to that call’s ID. That way the task looks done, and the model doesn’t persist in trying to get the other requested tool.
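A minimal sketch of that idea, assuming the model proposed two hypothetical search tools and the user kept only one. The tool names, call IDs, and result payload are all made up for illustration; the point is the shape of the rewritten history.

```python
# Hypothetical scenario: the model emitted two tool calls, the user accepted one.
# We rewrite history so the assistant message carries ONLY the accepted call,
# immediately followed by its tool result, as if the other call never happened.
proposed_tool_calls = [
    {"id": "call_1", "type": "function",
     "function": {"name": "search_google", "arguments": '{"q": "openai"}'}},
    {"id": "call_2", "type": "function",
     "function": {"name": "search_bing", "arguments": '{"q": "openai"}'}},
]

accepted = proposed_tool_calls[0]        # the user accepted only the first call
tool_result = '{"results": ["..."]}'     # output from actually running that tool

messages = [
    {"role": "user", "content": "Search for openai"},
    # Assistant message rewritten to show only the accepted call.
    {"role": "assistant", "content": None, "tool_calls": [accepted]},
    # Matching tool response, keyed by the accepted call's id.
    {"role": "tool", "tool_call_id": accepted["id"], "content": tool_result},
]
```

From the model’s point of view the task now looks complete, so it has no dangling tool call to chase.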

I would think that ideally your prompt would prevent that from happening. You’re creating an assistant with a description of what it should do and what tools (functions) it has in its toolkit. Your prompt (and the descriptions in your functions) should ideally make it fairly unambiguous which tool(s) get called. The user is not part of accept or reject, because the assistant hasn’t even formulated the answer when the function(s) get called. As part of tuning you might end up in that situation, and you might be able to fix it with better mandatory parameter settings, better explanations of what each tool does, etc. But you don’t even know HOW the assistant is going to use the results you are about to feed back until it has presented you with the response.

The way the tool call API works, you must return a tool call response for every call that appears in the assistant’s chat history. However, if your user decides not to use a particular tool that was called, you can simply reconstruct the message history as if the model never suggested the declined tool call. The main downside is that the model might start suggesting only one tool call per response, since its history shows only single calls. Alternatively, you could return a [NULL, NOT CHOSEN] response for a rejected tool call if you still want to keep track of the fact that the model chose that particular set of tools.
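A sketch of the second alternative: keep every proposed call in history, but answer rejected calls with an explicit "not chosen" payload so the API contract (one tool response per tool_call_id) is still satisfied. The tool names, IDs, and payload format are invented for illustration.

```python
# Hypothetical: two tool calls were proposed; the user accepted only "call_a".
# Every call still gets a tool response, but rejected ones carry a marker
# payload instead of real output, so history remains well-formed.
tool_calls = [
    {"id": "call_a", "type": "function",
     "function": {"name": "search_google", "arguments": "{}"}},
    {"id": "call_b", "type": "function",
     "function": {"name": "search_bing", "arguments": "{}"}},
]
user_accepted_ids = {"call_a"}

messages = [{"role": "assistant", "content": None, "tool_calls": tool_calls}]
for call in tool_calls:
    if call["id"] in user_accepted_ids:
        content = '{"results": ["..."]}'   # real tool output goes here
    else:
        # Marker payload standing in for [NULL, NOT CHOSEN].
        content = '{"status": "NOT_CHOSEN", "reason": "rejected by user"}'
    messages.append(
        {"role": "tool", "tool_call_id": call["id"], "content": content}
    )
```

This keeps the model’s multi-tool behavior visible in its own history, at the cost of it seeing rejection payloads it may then reason about.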

Check out Example invoking multiple function calls in one response in the link below for manually constructing tool responses:
https://platform.openai.com/docs/guides/function-calling


Thanks @henrye!

I ended up replacing the assistant message that contains tool_calls with a denial message:

"You proposed the following tool_calls, but I denied them. \n\nTools: {tools_str}\n\nReason: {reason}".format(tools_str=tools_str, reason=reason)

I had to make this message a user message. It seemed to be ignored as a system message.
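A sketch of that replacement, under the assumption that the application supplies `tools_str` and `reason` itself; the helper name and the sample history are made up for illustration.

```python
# Hypothetical helper: drop the assistant message carrying tool_calls and
# substitute a user-role denial message (system role seemed to be ignored).
def deny_tool_calls(messages, tools_str, reason):
    """Return a new history with the tool_calls message replaced by a denial."""
    denial = {
        "role": "user",
        "content": (
            "You proposed the following tool_calls, but I denied them. "
            "\n\nTools: {tools_str}\n\nReason: {reason}"
        ).format(tools_str=tools_str, reason=reason),
    }
    # Keep everything except messages that carry tool_calls.
    kept = [m for m in messages if not m.get("tool_calls")]
    return kept + [denial]

history = [
    {"role": "user", "content": "Delete all logs"},
    {"role": "assistant", "content": None, "tool_calls": [
        {"id": "call_x", "type": "function",
         "function": {"name": "delete_logs", "arguments": "{}"}}]},
]
new_history = deny_tool_calls(history, "delete_logs({})", "too destructive")
```

The next completion request then sees a plain user turn explaining the denial, with no dangling tool call to answer.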

I also get the sense that the new tools API was really not designed with human-in-the-loop in mind!

Another option is to bypass the assistant and send the potential solutions directly to the chat as buttons.

ChatGPT kind of does this sometimes (it will show two different responses to select).

Then the user can select the button, which can be sent as a single tool output to GPT.
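A sketch of that button flow, with invented call IDs and payloads: each proposed tool call is rendered as a button, and the clicked one becomes the single assistant tool call plus tool response fed back to the model.

```python
# Hypothetical: the UI shows one button per proposed tool call; on click,
# history is built as if only the chosen call was ever made and answered.
tool_calls = [
    {"id": "call_1", "type": "function",
     "function": {"name": "option_a", "arguments": "{}"}},
    {"id": "call_2", "type": "function",
     "function": {"name": "option_b", "arguments": "{}"}},
]

def on_button_click(clicked_id, tool_calls):
    """Build the follow-up messages once the user clicks a button."""
    chosen = next(c for c in tool_calls if c["id"] == clicked_id)
    return [
        {"role": "assistant", "content": None, "tool_calls": [chosen]},
        {"role": "tool", "tool_call_id": chosen["id"],
         "content": '{"selected": true}'},   # placeholder tool output
    ]

followup = on_button_click("call_2", tool_calls)
```

This keeps the human decision entirely in the UI layer, so the model only ever sees a single, already-resolved tool call.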