Creating Sequential User Flows for Function Calling

Dear OpenAI Community,

I’m seeking assistance with a question about creating user flows for function calling. Specifically, I’d like to know whether there’s a method to establish a sequence where, if a user triggers “flow1,” functions like function1, function2, and function3 are executed sequentially. The idea is that function1 runs first, followed by function2 if the user provides the correct input, and so forth.

I’m curious if anyone has experience implementing such a system or has ideas on how this could be achieved.

Thanks in advance.

Hey there!

Perhaps treating it like a wrapper function might help here? Essentially, “flow1” is treated as its own single function, executing the functions inside it in sequential order. That’s the answer off the top of my head, at least.
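Something like this rough sketch is what I mean (function1/function2/function3 are just the placeholders from your question, and the bodies are obviously stand-ins):

def function1(user_input):
    # first step, e.g. validate the input or run a search (stand-in body)
    return {"step": 1, "input": user_input}

def function2(result1):
    # second step, runs only after function1 has produced a result
    return {"step": 2, "previous": result1}

def function3(result2):
    # final step
    return {"step": 3, "previous": result2}

def flow1(user_input):
    # The model only ever sees and calls flow1; the steps inside run in order.
    r1 = function1(user_input)
    r2 = function2(r1)
    r3 = function3(r2)
    return r3

That way the model only needs one tool definition for flow1, and the ordering is guaranteed in your code rather than left to the model.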

2 Likes

I think it depends on a lot of details, specifically when running in a multi-threaded, multi-user environment, so maybe we can help better with some more details/examples.
You will typically have different time-out limits in different components, and then there is the ‘multi-threaded’ aspect where you need to chain things properly.

I do have a lot of experience using Assistants multi-user / multi-function, and I have been able to solve most of what you might be talking about with prompting.
Curious to learn more.

2 Likes

I was also thinking of something similar, but I also think this is not foolproof, and for crucial flows hallucinations need to be handled more carefully.

For example, a booking flow: → function1: make a search and return search results to the user. → Once the user selects an option, it triggers function2, which asks the user to give their details. → Once these details are validated, function3 is triggered; otherwise we ask the user to retry with the correct details.
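What I had in mind is to keep the flow state on my side and reject out-of-order calls, something like this rough sketch (the step names and the do_search / details_are_valid / do_booking stubs are all made up):

def do_search(destination, date):
    # placeholder search backend
    return [{"trip_id": "T1", "destination": destination, "date": date}]

def details_are_valid(details):
    # placeholder validation of the user's booking details
    return bool(details.get("passenger_name")) and bool(details.get("email"))

def do_booking(details):
    # placeholder booking backend
    return {"status": "confirmed", "details": details}

FLOW_STEPS = ["search", "collect_details", "book"]

class BookingFlow:
    def __init__(self):
        self.step = 0            # index into FLOW_STEPS
        self.details = None

    def handle_tool_call(self, name, args):
        expected = FLOW_STEPS[self.step]
        if name != expected:
            # The model tried to skip ahead or repeat a step: push it back on track.
            return {"error": f"call '{expected}' first"}
        if name == "search":
            self.step = 1
            return do_search(**args)
        if name == "collect_details":
            if not details_are_valid(args):
                return {"ok": False, "message": "please retry with correct details"}
            self.details = args
            self.step = 2
            return {"ok": True}
        if name == "book":
            self.step = 0        # flow finished, reset
            return do_booking(self.details)

The idea is that even if the model hallucinates a call out of order, the wrapper refuses it and the flow stays consistent.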

Also, I didn’t actually think about the multi-user scenario; will that cause some inconsistency too?
I am pretty new to this LLM stuff, so sorry if I sound naive.

Your example is perfect for something that the Assistant should be able to handle all by itself. You provide it well-documented functions (‘search’, for example, and ‘book’). ‘Book’ has well-documented fields it needs from the user.

Your prompt will guide the GPT through the process: “You are a travel booking assistant. First you will have to determine the destination the user is interested in. Then you will use <> to present them options. Once the user has chosen an option you will ask for more details…”
etc.

This has to be very detailed and specific, but should be no problem in terms of ‘flow’.
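For what it’s worth, here is a rough sketch of how that could look with the Assistants API; the instructions text, the model name, and the two function schemas below are only illustrative, not a finished design:

from openai import OpenAI

client = OpenAI()

# Illustrative only: the prompt and both function schemas are examples.
assistant = client.beta.assistants.create(
    name="Travel booking assistant",
    model="gpt-4o",
    instructions=(
        "You are a travel booking assistant. First determine the destination "
        "the user is interested in. Then call search_trips to present them "
        "options. Once the user has chosen an option, collect the details "
        "required by book_trip and then call book_trip."
    ),
    tools=[
        {"type": "function", "function": {
            "name": "search_trips",
            "description": "Search available trips for a destination and date.",
            "parameters": {
                "type": "object",
                "properties": {
                    "destination": {"type": "string"},
                    "date": {"type": "string", "description": "YYYY-MM-DD"},
                },
                "required": ["destination", "date"],
            },
        }},
        {"type": "function", "function": {
            "name": "book_trip",
            "description": "Book a specific trip for a passenger.",
            "parameters": {
                "type": "object",
                "properties": {
                    "trip_id": {"type": "string"},
                    "passenger_name": {"type": "string"},
                    "email": {"type": "string"},
                },
                "required": ["trip_id", "passenger_name", "email"],
            },
        }},
    ],
)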

Thanks for your inputs; I could not understand your approach properly. I can define a function like this:

def search_and_book(s, d, d1):

    def search_call(s, d, d1, i):
        # search_call implementation
        pass

    def booking_call(idf, a):
        # booking_call implementation
        pass

    # search_and_book implementation: run search_call first, then booking_call
    pass

But how can I make the model call the internal functions? What I mean is that the model needs to know the output from the search call, and then send the user a message asking for the details needed for the booking call. Do I need to define these instructions in prompts? That way the prompt will become pretty lengthy with only a couple of flows.

Apologies if I understood you incorrectly.

That is all a matter of the assistant prompt. I gave the beginning of what you would put in the Assistant instructions:

You are a travel booking assistant. First you will have to determine the destination the user is interested in. Then you will use <> to present them options. Once the user has chosen an option you will ask for more details…
etc.

And so yes, you would define those functions and provide the function template that tells GPT what parameters it needs to give you. Follow the Assistants playbook; it has an example of function calling as well:

https://platform.openai.com/docs/assistants/tools/function-calling
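The loop on your side then looks roughly like this (sketch only; it reuses the client, assistant and function names from the sketch above, and do_search / do_booking are stand-ins for your own backends):

import json
import time

# Sketch of the run loop: the model requests one tool at a time, sees its
# output, and only then decides the next step (search first, book later).
thread = client.beta.threads.create(
    messages=[{"role": "user", "content": "I want to book a trip to Lisbon on 2024-07-01"}]
)
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)

while True:
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)
    if run.status == "requires_action":
        outputs = []
        for call in run.required_action.submit_tool_outputs.tool_calls:
            args = json.loads(call.function.arguments)
            if call.function.name == "search_trips":
                result = do_search(**args)    # your search backend (stand-in)
            else:
                result = do_booking(args)     # your booking backend (stand-in)
            outputs.append({"tool_call_id": call.id, "output": json.dumps(result)})
        run = client.beta.threads.runs.submit_tool_outputs(
            thread_id=thread.id, run_id=run.id, tool_outputs=outputs
        )
    elif run.status in ("completed", "failed", "cancelled", "expired"):
        break
    time.sleep(1)

The model sees each tool output before it asks for the next one, which is what gives you the sequential behaviour without spelling out every step in code.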

I am having a similar issue. I want to be able to ask a complex question where GPT has to use a tool and then use the results of that tool to submit some form. But there only seem to be ways to call multiple tools in parallel, not sequentially.

Example: I want to have GPT look up the database schema only when needed and then act on the schema with a generated SQL statement. Then have GPT look at the results and provide a human-readable output. This sequential flow could be useful for multiple use cases. I don’t want to tell the assistant about the flow of a process, because that would limit its flexibility.
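Here is roughly the sequential loop I’m picturing (sketch only; get_schema and run_sql are invented names, and the stub bodies just return canned data):

import json
from openai import OpenAI

client = OpenAI()

# Invented tool names for illustration: one to fetch the schema, one to run SQL.
TOOLS = [
    {"type": "function", "function": {
        "name": "get_schema",
        "description": "Return the database schema (tables and columns).",
        "parameters": {"type": "object", "properties": {}},
    }},
    {"type": "function", "function": {
        "name": "run_sql",
        "description": "Run a read-only SQL query and return the rows.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    }},
]

def get_schema():
    return {"orders": ["id", "customer_id", "shipped_at"]}   # placeholder data

def run_sql(query):
    return [{"count": 42}]                                    # placeholder result

messages = [{"role": "user", "content": "How many orders shipped last week?"}]

while True:
    resp = client.chat.completions.create(model="gpt-4o", messages=messages, tools=TOOLS)
    msg = resp.choices[0].message
    if not msg.tool_calls:
        print(msg.content)        # final, human-readable answer
        break
    messages.append(msg)          # keep the assistant turn that requested the tools
    for call in msg.tool_calls:
        args = json.loads(call.function.arguments)
        result = get_schema() if call.function.name == "get_schema" else run_sql(**args)
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": json.dumps(result),
        })

Because each tool result goes back into the conversation before the next request, the model could chain get_schema → run_sql → plain-language answer on its own, without a hard-coded flow in the prompt.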

Am I overcomplicating it?

I’ve built this, and it is as easy as explaining it to GPT the way you just did to us.

1 Like