Executing multiple GPT functions in a single prompt

Hello OpenAI Community,

I am currently using the GPT functions for specific tasks. In particular, I have developed functions to play music and to open programs. However, what I have noticed is that I am currently only able to execute one function per prompt. This means that if I want to open Spotify and play a specific song, I have to send two separate prompts: one to open Spotify and another to play the song.

My question is: Is there a way to execute multiple functions in a single prompt without creating a complex overarching function that combines these two actions?

For instance, I would like to be able to say something like “Open Spotify for me and play the song XYZ”, and both functions should be executed sequentially.

I would appreciate any suggestions or best practices you can share!

There is no standard way to execute more than one function in a single API call; you would have to do some clever function manipulation to achieve it. You can search the forum for “multiple function calls”.


The API itself cannot do this. There’s only a single function invocation in each of the choices – there’s no way to return more than one invocation.
What you can do, though, is break this into multiple steps. Have the first step infer how many separate requests there are, and then invoke the model separately for each one:

What are the individual commands in the following text? Respond with a brief bulleted markdown list:
Open Spotify for me and play the song XYZ

* Open Spotify for me
* play the song XYZ

Now, extract the bulleted items, and execute each one with function calling turned on:

In the context of the previous request, execute the following command:
* Open Spotify for me

-> function call: open_app({"name": "Spotify"})
In the context of the previous request, execute the following command:
* play song XYZ

-> function call: play_song_in_app("Spotify", "XYZ")
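A minimal sketch of that loop in Python. The `run_model` callback is a placeholder for whatever API wrapper you use to send the prompt with function calling turned on (it is not a real OpenAI SDK function); the part actually shown here is parsing the bulleted list from step 1 and invoking the model once per command:

```python
import re

def parse_bullets(markdown: str) -> list[str]:
    """Extract the text of each '* item' line from the model's bulleted list."""
    return [m.group(1).strip()
            for m in re.finditer(r"^\*\s+(.+)$", markdown, re.MULTILINE)]

def execute_commands(bulleted_reply: str, run_model):
    """Invoke the model once per extracted command.

    `run_model` is a placeholder for your own API call; it should send the
    prompt (with function calling enabled) and handle the resulting call.
    """
    results = []
    for command in parse_bullets(bulleted_reply):
        prompt = ("In the context of the previous request, "
                  f"execute the following command:\n* {command}")
        results.append(run_model(prompt))
    return results

# Example with a stubbed model that just echoes the prompt back:
reply = "* Open Spotify for me\n* play the song XYZ"
print(execute_commands(reply, run_model=lambda p: p))
```

The stub is only there so the sketch runs standalone; in practice each `run_model` call would carry the conversation context along as well.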

What you are essentially proposing is that you have a prompt template and instructions such that the LLM takes input and generates structured YAML. That YAML can then be parsed and turned into some sort of function call, either in subsequent calls or externally, yes?

I see this technique gaining popularity in workflow and agent scripting, in favor of OpenAI’s function calling features, because the function calling implementation is just too unpredictable. If you pass in a YAML template of options, you can get a more predictable version of your step 1 output, then parse it and run external functions without relying on GPT. You still need to validate the options and sometimes fix the YAML, but getting OpenAI’s function calls to run predictably requires a similar level of validation, plus comparatively more error handling.
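To illustrate the validation step: assuming the YAML has already been parsed into a Python list of steps (e.g. with PyYAML's `safe_load`), a small checker can reject unknown actions and missing options before anything runs. The action names and option keys here are made up for the example:

```python
# Hypothetical whitelist: action name -> required option keys.
ALLOWED_ACTIONS = {
    "open_app": {"name"},
    "play_song": {"app", "title"},
}

def validate_steps(steps):
    """Split a parsed YAML list of {action, options} into valid steps and errors."""
    valid, errors = [], []
    for i, step in enumerate(steps):
        action = step.get("action")
        options = step.get("options", {})
        if action not in ALLOWED_ACTIONS:
            errors.append(f"step {i}: unknown action {action!r}")
            continue
        missing = ALLOWED_ACTIONS[action] - options.keys()
        if missing:
            errors.append(f"step {i}: missing options {sorted(missing)}")
            continue
        valid.append(step)
    return valid, errors

steps = [
    {"action": "open_app", "options": {"name": "Spotify"}},
    {"action": "play_song", "options": {"app": "Spotify"}},  # missing "title"
]
print(validate_steps(steps))
```

Only the steps that pass validation get executed; the errors can be fed back to the model or surfaced to the user, which is the error-handling trade-off described above.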


As mentioned before, I would also consider switching from function calling to an instructional prompt that asks the model to extract the relevant info and/or the eventual function arguments into JSON, and work with that.
In my experience it can get tricky to engineer, but it’s totally doable and can eventually be reliable.
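A sketch of that approach, assuming the model has been instructed to reply with a JSON array of calls. The function names and the dispatch table are illustrative, not part of any SDK:

```python
import json

# Local implementations that the model's JSON output dispatches to (illustrative).
def open_app(name):
    return f"opened {name}"

def play_song(app, title):
    return f"playing {title} in {app}"

DISPATCH = {"open_app": open_app, "play_song": play_song}

def run_calls(model_output: str):
    """Parse the model's JSON reply and execute each requested call in order."""
    calls = json.loads(model_output)
    results = []
    for call in calls:
        fn = DISPATCH.get(call["name"])
        if fn is None:
            raise ValueError(f"unknown function: {call['name']}")
        results.append(fn(**call["arguments"]))
    return results

reply = '''[
  {"name": "open_app", "arguments": {"name": "Spotify"}},
  {"name": "play_song", "arguments": {"app": "Spotify", "title": "XYZ"}}
]'''
print(run_calls(reply))  # → ['opened Spotify', 'playing XYZ in Spotify']
```

In practice you would wrap `json.loads` in error handling (the model occasionally emits malformed JSON), which is the "tricky to engineer" part.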

I actually developed a platform for developing and testing this exact use case – you’re welcome to check out Promptotype (there’s even an example similar to your question on the landing page :))