I’ve had quite a bit of success on 3.5 passing a mandated “respond-many” function that accepts an array of objects, each with a “speaker” string property and a “dialog” string property, to simulate a user being in a group chat with multiple AI personas (introduced in an earlier message in the conversation history).
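For reference, the function definition looks roughly like this. The wrapping “responses” property name and the descriptions are illustrative rather than exact, and I’m showing it in the legacy functions-style schema used with gpt-3.5-turbo:

```python
# Rough sketch of the "respond-many" function schema (descriptions are illustrative).
respond_many = {
    "name": "respond-many",
    "description": "Deliver dialog from one or more of the AI personas in the group chat.",
    "parameters": {
        "type": "object",
        "properties": {
            "responses": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "speaker": {
                            "type": "string",
                            "description": "Name of the persona speaking",
                        },
                        "dialog": {
                            "type": "string",
                            "description": "What that persona says",
                        },
                    },
                    "required": ["speaker", "dialog"],
                },
            }
        },
        "required": ["responses"],
    },
}
```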
Thus far, it doesn’t feel obligated to respond from everyone present if a single response will do, and it’s quite willing to send back conversations between two AI personas in the sort of way you expect when asking it to write a screenplay or similar.
However, I can’t quite shake the feeling that the intended use case for function calling is as an intermediate step in what the AI “expects” to do, rather than the termination of its work. I’m curious whether anyone has discovered gotchas where that intended use case affects either how the function is presented in the system message or how the training has been carried out, in a way that makes it problematic to use for this purpose.
Each time dialog is generated, it’s added to a separate “chat history” string that is sent as a single message as part of each new prompt. With each invocation there’s never any implication that the AI has called the function before; it’s just responding fresh to the conversation history as a single input each time. That’s one of the things that worries me: invoking the API, telling the AI that it always responds via function calling, but then appearing to give it evidence that it doesn’t…
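Concretely, the per-turn flow is roughly this. It’s a minimal sketch using the legacy 0.x Python client with functions / function_call (the 1.x client would use tools / tool_choice instead), and SYSTEM_PROMPT, the message layout, and the helper structure are stand-ins rather than my exact code:

```python
import json
import openai  # legacy 0.x client; the 1.x client uses tools/tool_choice instead

SYSTEM_PROMPT = "You are several AI personas in a group chat..."  # personas introduced here
chat_history = ""  # running transcript, resent in full each turn

def take_turn(user_name: str, user_text: str) -> list:
    """Append the user's line, request persona dialog, and fold it back into the history."""
    global chat_history
    chat_history += f"{user_name}: {user_text}\n"

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            # The whole accumulated history goes in as a single fresh message,
            # so the model never sees evidence of its own earlier function calls.
            {"role": "user", "content": chat_history},
        ],
        functions=[respond_many],
        function_call={"name": "respond-many"},  # mandate the function on every call
    )

    args = json.loads(response.choices[0].message.function_call.arguments)
    for entry in args["responses"]:
        chat_history += f"{entry['speaker']}: {entry['dialog']}\n"
    return args["responses"]
```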
So yeah, any gotchas or issues I should be steering clear of in this area?