Multiple function detection in the GPT API

Hello all

I am currently working with the GPT API and have a specific requirement: I would like the bot to recognize multiple functions in a single prompt and then execute each function call individually in a new API call. I am using function calling.

For example, if I send a prompt like “Play me song xyz and then create a Word file about xyz”, I want the bot to recognize both functions and then execute them in two separate API calls.

However, I don’t want to create a single function that executes multiple functions. I want to be able to combine my functions freely, in any order.

I look forward to your feedback and thank you in advance for your assistance.

Greetings Even


What you describe is the natural iterative process of the function-enabled AI.

You could say “find a cute cat picture from my instagram and post to my twitter”, and if it had functions where it could perform those tasks, it would use their descriptions to figure out what to do step by step. The only thing the software must do is keep a “conversation history” that carries forward the past chat and the user’s question, along with a function-role message returning the result of each call.
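A minimal sketch of that loop, where call_model and run_function are hypothetical helpers standing in for the ChatCompletion call and your own dispatch logic:

messages = [{"role": "user", "content": "Play me song xyz and then create a word file about xyz"}]

while True:
    message = call_model(messages)  # hypothetical helper: one ChatCompletion call per step
    if not message.get("function_call"):
        break  # the AI answered in plain text; we are done
    name = message["function_call"]["name"]
    result = run_function(name, message["function_call"]["arguments"])  # hypothetical dispatch
    messages.append(message)  # keep the function call itself in the history
    messages.append({"role": "function", "name": name, "content": result})

print(message["content"])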

What is the best way to solve this? Here is a script excerpt:

import json
import os

import openai

def get_openai_response(prompt):
    add_to_message_history("user", prompt)

    # Build the message list: system description plus the running history.
    # message_history already contains the new prompt after the call above,
    # so it is not appended a second time.
    messages = [{"role": "system", "content": get_chatbot_description()}]
    messages.extend(message_history)

    functions = [
        # function schemas trimmed from this excerpt
    ]

    try:
        # Model and sampling settings come from environment variables.
        response = openai.ChatCompletion.create(
            model=os.getenv('OPENAI_MODEL'),
            messages=messages,
            functions=functions,
            function_call="auto",
            max_tokens=int(os.getenv('OPENAI_MAX_TOKENS')),
            temperature=float(os.getenv('OPENAI_TEMPERATURE'))
        )

        response_message = response["choices"][0]["message"]

        # The model requested a function call instead of answering in text.
        if response_message.get("function_call"):
            # Map the function names the model may call to local callables.
            available_functions = {
                "exit_program": exit_program,
                "spotify_control": spotify_control,
                "calendar_control": calendar_control,
                "system_control": system_control,
                "HomeLights": HomeLights,
            }
            function_name = response_message["function_call"]["name"]
            
            function_to_call = available_functions.get(function_name)

            if function_to_call:
                # Arguments arrive as a JSON string; default to "{}" if absent.
                arguments_str = response_message["function_call"].get("arguments", "{}")
                function_parameters = json.loads(arguments_str)
                
                function_parameters["prompt"] = prompt
                function_response = function_to_call(**function_parameters)
                
                add_to_message_history("assistant", function_response)
                return function_response
            else:
                return f"Function '{function_name}' not found."
        else:
            response_content = response_message["content"]
            cleaned_response = remove_consecutive_duplicates(response_content)
            add_to_message_history("assistant", cleaned_response)
            return cleaned_response

    except Exception as e:
        print(f"[ERROR] Error in get_openai_response: {e}")
        return "Sorry, there was an error processing your request."

The correct way to add a function return to the chat history, below the question most recently asked, is with the “function” role message, requiring a “name” parameter with the function name that is returning the data.

One must keep a consecutive chat history of all functions the AI has called, at least until the AI finally answers the user.

We are not given any guidance, but I would put a pair of assistant/function roles in the history for each return, where the assistant message is written like <function_name>(<function_call_json>). That way the AI can also see what it tried and won’t repeat the same mistakes.
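For instance, after a call to a spotify_control function, that pair might look like this (a sketch of the convention above, not an official format):

messages.append({
    "role": "assistant",
    "content": 'spotify_control({"action": "play", "song": "xyz"})'  # simulated call text
})
messages.append({
    "role": "function",
    "name": "spotify_control",  # the required "name" parameter
    "content": '{"status": "playing", "song": "xyz"}'  # what the function returned
})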

This example from OpenAI documentation shows putting the whole API response “message” in instead of simulating the language the AI actually produced:

messages=[
    {"role": "user", "content": "What is the weather like in boston?"},
    message,
    {
        "role": "function",
        "name": function_name,
        "content": function_response,
    },
],

I answered almost exactly this question just a few days ago.
Is this a school assignment or something?
No, wait … I answered it for you.

The answer is still the same. ChatGPT can only call one function at a time.
If you look at the API, there is literally no way it could call more than one function in a single invocation. There’s no space to return them.

In the previous answer, I described exactly how you could use chain of thought prompting to implement exactly what you’re suggesting.
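One way to apply that idea, sketched here with illustrative wording (an assumption, not a quote from that earlier answer), is a system instruction that makes the model decompose the request and call one function per turn:

# Illustrative instruction; your loop then feeds each function result
# back and calls the API again until no further function call is made.
system_message = (
    "When the user asks for several things at once, first list the steps, "
    "then perform them one at a time, calling exactly one function per turn. "
    "After each function result, continue with the next step."
)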


Thank you, this has helped me a lot. Below I created a short test function so you can see what the others look like. Normally I have multiple functions, but I wanted to show you that I get strange results and reactions from the bot when I trigger multiple functions. get_dynamic_reaction reacts to the function that was executed. If I say “open Word and turn on my light”, it executes both, but the reaction seems weird because the bot interprets it wrongly… What do you suggest to make this clearer?

import json
import os
import sys
import tkinter as tk

import openai

def exit_program(prompt=None):
    # Speak a generated farewell, then tear down the Tk UI and exit.
    reaction = get_dynamic_reaction("exit_program", prompt)
    speak(reaction)
    for win in tk._default_root.winfo_children():
        win.destroy()
    tk._default_root.quit()
    tk._default_root.destroy()
    sys.exit()  # exits immediately, so no value is returned to the caller


def get_dynamic_reaction(action, prompt, details=None):
    # Ask the model (hard-coded to gpt-3.5-turbo here) for a spoken
    # reaction to the action that was just performed.
    prompt_text = f"Provide an appropriate response after performing the action '{action}' with the context: {prompt}"
    
    if details:
        prompt_text += f". Additional details: {details}."

    messages = [{"role": "system", "content": get_chatbot_description()}]
    messages.extend(message_history)
    messages.append({"role": "user", "content": prompt_text})
    
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
        max_tokens=250
    )
    
    response_content = response.choices[0].message["content"].strip()

    print(f"get_dynamic_reaction output: {response_content}")
    
    add_to_message_history("assistant", response_content)
    return response_content



def get_openai_response(prompt):
    add_to_message_history("user", prompt)

    # message_history already contains the new prompt after the call above,
    # so it is not appended a second time.
    messages = [{"role": "system", "content": get_chatbot_description()}]
    messages.extend(message_history)

    functions = [
        {"name": "exit_program", "description": "Terminate the program", "parameters": {"type": "object", "properties": {}}},
        # additional function schemas omitted for this example
    ]

    # Loop until the model answers in plain text instead of
    # requesting another function call.
    while True:
        try:
            response = openai.ChatCompletion.create(
                model=os.getenv('OPENAI_MODEL'),
                messages=messages,
                functions=functions,
                function_call="auto",
                max_tokens=int(os.getenv('OPENAI_MAX_TOKENS')),
                temperature=float(os.getenv('OPENAI_TEMPERATURE'))
            )

            response_message = response["choices"][0]["message"]

            if response_message.get("function_call"):
                available_functions = {
                    "exit_program": exit_program,
                    "spotify_control": spotify_control,
                    "calendar_control": calendar_control,
                    "system_control": system_control,
                    "HomeLights": HomeLights,
                }
                function_name = response_message["function_call"]["name"]
                
                function_to_call = available_functions.get(function_name)

                if function_to_call:
                    arguments_str = response_message["function_call"].get("arguments", "{}")
                    function_parameters = json.loads(arguments_str)
                    
                    function_parameters["prompt"] = prompt
                    function_response = function_to_call(**function_parameters)
                  
                    # Record the call and its result as an assistant/function
                    # pair so the model can see what it already did and tried.
                    messages.append({
                        "role": "assistant",
                        "content": f"{function_name}({arguments_str})"
                    })
                    messages.append({
                        "role": "function",
                        "name": function_name,
                        "content": function_response
                    })

                    continue
                else:
                    return f"Function '{function_name}' not found."
            else:
                response_content = response_message["content"]
                cleaned_response = remove_consecutive_duplicates(response_content)
                add_to_message_history("assistant", cleaned_response)
                return cleaned_response

        except Exception as e:
            print(f"[ERROR] Error in get_openai_response: {e}")
            return "Sorry, there was an error processing your request."