Using function calling to simulate multi-person conversation

I’ve had quite a bit of success on 3.5 passing a mandated “respond-many” function that accepts an array of objects, each with a “speaker” string property and a “dialog” string property, to simulate a user being in a group chat with multiple AI personas (introduced in an earlier message in the conversation history).
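For reference, the function definition I’m describing looks roughly like this. The property names (“speaker”, “dialog”) are the ones mentioned above; the surrounding shape is just the standard JSON-schema structure function definitions use, and the exact function name and descriptions are illustrative:

```javascript
// Sketch of the "respond-many" function definition described above.
// "speaker" and "dialog" match the post; other names are illustrative.
const respondMany = {
  name: "respond_many",
  description: "Deliver dialog from one or more personas in the group chat.",
  parameters: {
    type: "object",
    properties: {
      responses: {
        type: "array",
        items: {
          type: "object",
          properties: {
            speaker: { type: "string", description: "Which persona is talking" },
            dialog: { type: "string", description: "What that persona says" }
          },
          required: ["speaker", "dialog"]
        }
      }
    },
    required: ["responses"]
  }
};
```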

Thus far, it doesn’t feel obligated to respond from everyone present if a single response will do, and it’s quite willing to send back conversations between two AI personas in the sort of way you expect when asking it to write a screenplay or similar.

However, I can’t quite shake the feeling that the intended use case for function calling is as an intermediate step in what the AI “expects” to do, rather than the termination of its work. I’m curious whether anyone has discovered any gotchas along the way where that intended use case affects either how it’s presented in the system message, or how the training has been carried out, in a way that makes it problematic to use for this purpose.

Each time dialog is generated it’s added to a separate “chat history” string, which is sent as a single message as part of each new prompt. So with each invocation there’s never any implication that the AI has called the function before; it’s just responding fresh to the conversation history as a single input each time. That’s one of the things that worries me: invoking the API, telling the AI that it always responds via function calling, but then appearing to give it evidence that it doesn’t…
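A minimal sketch of that history-accumulation approach (the helper and variable names here are my own, not from any library):

```javascript
// Accumulate each generated line into a single transcript string,
// then send the whole transcript as one fresh user message per call.
function appendDialog(history, speaker, dialog) {
  return history + `${speaker}: ${dialog}\n`;
}

let chatHistory = "";
chatHistory = appendDialog(chatHistory, "Sam", "Has anyone tried the new build?");
chatHistory = appendDialog(chatHistory, "AI-Persona-1", "Yes, it works for me.");

// Each API call sees the transcript as a single message, so the model
// never sees its own prior function calls in the message list.
const messages = [
  { role: "system", content: "You always respond via the respond_many function." },
  { role: "user", content: chatHistory }
];
```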

So yeah, any gotchas or issues I should be steering clear of in this area?

The only thing I’d say is: don’t forget that despite being cloaked in a function call, everything is still just text being passed to the model for evaluation. There’s nothing special going on underneath; you can achieve the same thing by listing out all of the function options and telling the model you’d like it to show you what a function call would look like. So keep in mind that it’s all text, and if that block of text makes sense, the model should process it just fine. Trying to do “software” tricks and expecting them to work on a language model usually doesn’t.

If you just want to simulate conversation between multiple personas, you can achieve it without using function calling.

Try this system prompt:

const persona_list = [
	{ name: "Robert", description: "business owner, capitalist" },
	{ name: "Jake", description: "professor, socialist" },
	{ name: "Mike", description: "farmer, libertarian" }
]

const persona_names = persona_list.map((item) => item.name).join(', ')
const persona_descriptions = persona_list.map((item) => `${item.name}: ${item.description}.`).join('\n')

const system_prompt = `We will simulate a discussion between different personas in [${persona_names}].\n` +
            `You will respond to the subject of inquiry based on these personas.\n` +
            `Most of the time your responses should be a sentence or two per persona.\n` +
            `The following are the descriptions of each persona:\n` + persona_descriptions

Here is a sample conversation; you can let them all reply, or talk to only one or two:

Me: What do you guys think about implementing UBI as AI is supposedly poised to take our jobs?
Response:

{
  role: 'assistant',
  content: 'Robert: UBI may seem like a good idea to address job displacement, but it could create a dependency on the government and hinder productivity.\n' +
    'Jake: UBI could be a crucial tool in ensuring that everyone benefits from technological advancements and the potential loss of jobs. It could create a more equitable society.\n' +
    "Mike: UBI might be a temporary solution, but it doesn't address the root issue. Instead, we should focus on fostering entrepreneurship and creating an environment where individuals can thrive."
}

Me: Robert, I am surprised that you seem to be a bit negative about it.
Response:

{
  role: 'assistant',
  content: "Robert: As a business owner, I understand the potential benefits of UBI, but I also have concerns about the long-term consequences. It's important to consider both the short-term assistance it provides and the potential impact on individual motivation and productivity."
}

Me: Do you guys agree on what Robert just said?
Response:

{
  role: 'assistant',
  content: "Jake: While I understand Robert's concerns, I believe that UBI can be structured in a way that addresses those potential issues. It's crucial to find a balance between providing necessary support and ensuring individuals are motivated to contribute to society.\n" +
    'Mike: I agree with Robert to some extent. UBI may have unintended consequences such as disincentivizing work. Instead, we should focus on creating an environment that encourages entrepreneurship and innovation.'
}

You can probably extend the system prompt to output JSON if you do not want to parse the string.
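If you do stick with the plain-text format, the “Name: dialog” convention is easy enough to parse without JSON. A rough sketch (the helper is my own, not part of any API):

```javascript
// Split "Persona: dialog" lines back into structured objects.
function parsePersonaLines(content) {
  return content
    .split("\n")
    .filter((line) => line.trim() !== "")
    .map((line) => {
      const idx = line.indexOf(":");
      return {
        speaker: line.slice(0, idx).trim(),
        dialog: line.slice(idx + 1).trim()
      };
    });
}

const parsed = parsePersonaLines(
  "Robert: UBI may seem like a good idea.\nJake: UBI could be a crucial tool."
);
// parsed[0] → { speaker: "Robert", dialog: "UBI may seem like a good idea." }
```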

Also, remove the line “Most of the time your responses should be a sentence or two per persona.” if you want them to talk a lot (they sure do talk a lot lol).


Hmmm, that makes me wonder, as the “roles” are just text strings, if you could try changing the “user” role to “User : Sam” and “User : Greg” :thinking:
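Rather than altering the role string itself, the chat API does accept an optional `name` field on messages, which might be a cleaner way to get the same effect. A sketch (I haven’t tested how this interacts with the function-calling path):

```javascript
// Using the optional "name" field on user messages to distinguish
// human participants, instead of changing the "user" role string.
const messages = [
  { role: "system", content: "You are chatting with several users at once." },
  { role: "user", name: "Sam", content: "What time works for everyone?" },
  { role: "user", name: "Greg", content: "Any time after 3pm suits me." }
];
```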

The difference that I’ve presumed here is that the model has been specifically trained to return parseable JSON in response to an instruction to call a function, whereas simply asking it to return JSON hasn’t, for me at least, been as reliable as I would like…

I think you could still use the function role; just make that minor modification to the user role with the addition of names. It seems like something the model would pick up on, since it’s all just text at the end of the day.