Thanks for the explanation, but what is this in the example?
These are certainly not the results of the tool calls, nor the message returned by the LLM. As you can see, tool_call.function is about two levels down from response.choices[0].message, so should we simply say:
conversation_messages.append(response.choices[0].message)
?
------------- The following is from the example in:
https://cookbook.openai.com/examples/structured_outputs_multi_agent
conversation_messages.append([tool_call.function for tool_call in response.choices[0].message.tool_calls])
The example seems to require something extra beyond what you said:
Function or Tool is executed, result of that function/tool call is appended to the existing prompt chain
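
Is something like the following what you mean? This is only a minimal sketch of how I understand the standard Chat Completions tool-calling flow; the get_weather tool, its schema, and the model name are placeholders I made up, not taken from the cookbook example.

```python
import json
from openai import OpenAI

client = OpenAI()

# Hypothetical tool schema and implementation, just for illustration
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # placeholder result

conversation_messages = [{"role": "user", "content": "What's the weather in Paris?"}]

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=conversation_messages,
    tools=tools,
)

assistant_message = response.choices[0].message

# Append the whole assistant message (it carries the tool_calls) ...
conversation_messages.append(assistant_message)

# ... then execute each tool call and append its result as a "tool" role message
for tool_call in assistant_message.tool_calls:
    args = json.loads(tool_call.function.arguments)
    result = get_weather(**args)
    conversation_messages.append({
        "role": "tool",
        "tool_call_id": tool_call.id,
        "content": result,
    })

# Second call so the model can see the tool results
follow_up = client.chat.completions.create(
    model="gpt-4o",
    messages=conversation_messages,
)
print(follow_up.choices[0].message.content)
```

If that is the intended pattern, I still don't see where appending [tool_call.function for ...] (as in the cookbook line above) fits in.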