Emulated multi-function calls within one request

These capabilities are not documented, but they can be discovered.

And here is an example of my extraction of someone's very badly written function:


@PriNova

Thanks for sharing this; it works like a charm. I also didn't know that you can embed properties inside properties, neat stuff.
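To make the nested-properties trick concrete, here is a minimal sketch of a single function schema that emulates several calls at once. The function and property names (`multi_tool`, `get_weather`, `send_email`) are purely illustrative, not from the thread:

```python
# Hypothetical schema: one "multi_tool" function whose parameters embed
# a sub-object per emulated call. The model can then fill in several of
# these sub-objects within a single function-call JSON response.
multi_call_schema = {
    "name": "multi_tool",
    "description": "Execute several emulated tool calls in one response.",
    "parameters": {
        "type": "object",
        "properties": {
            "get_weather": {           # emulated call #1
                "type": "object",
                "properties": {
                    "city": {"type": "string"},
                },
            },
            "send_email": {            # emulated call #2
                "type": "object",
                "properties": {
                    "to": {"type": "string"},
                    "body": {"type": "string"},
                },
            },
        },
    },
}
```

You would pass this in the `functions` list of a chat completion request; each top-level property plays the role of one emulated function call.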


My advice is: don't do it. It's not scalable, and you will get inconsistent results, making it much harder to debug.

Adding multiple calls to a single prompt will only cause the model to lose focus and increase the chance of hallucinating function input values.

This means that to get a more accurate response you need to use a more powerful model like GPT-4 instead of GPT-3.5, and potentially one with a larger context window (if you have a lot of calls to make).
This results in increased cost and slower response times.

Imagine you have to execute 10 function calls to get the answer you want, all crammed into the same prompt, and the model chokes on the 10th function call and produces malformed JSON, or simply runs out of context, leaving you with a half-written response. In that scenario you have to send all 10 function calls back to the model to be reprocessed.

A better approach, if you want a faster response for multiple calls, is to put each call in its own prompt/completion and execute the HTTP requests in parallel.

That way each function call gets the model's full attention, and you can use a model with a smaller context window and fewer parameters. It also means that you only have to send one function call back to the model if its response is malformed.
So it should work out cheaper, with more consistent results, and be faster, while also being easier to debug.
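The parallel approach described above can be sketched as follows. `call_model` is a stand-in for whatever HTTP request you make per prompt (stubbed here so the sketch is self-contained); in practice each request would carry a single function definition:

```python
from concurrent.futures import ThreadPoolExecutor

def call_model(prompt: str) -> str:
    # Stand-in for one chat-completion HTTP request carrying a single
    # function definition; replace with a real API call.
    return f"result for: {prompt}"

prompts = [f"function call {i}" for i in range(10)]

# Each call gets its own request, and the requests run concurrently,
# so total latency is roughly that of the slowest single call.
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(call_model, prompts))
```

If one of the ten responses comes back malformed, only that single prompt needs to be resent, not the whole batch.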

Then I would say the principle of separation of concerns/responsibilities is not fulfilled. I assume this is why in the ChatGPT web app you can only choose 3 plugins simultaneously.

I solved it by providing examples. However, I'm still wondering how the example I pass can have an effect on the content generated, since I'm pretty sure it's not only influencing the format.

Since the API was changed to support multiple function calls (https://platform.openai.com/docs/guides/function-calling), this thread is marked as solved.
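For reference, a minimal sketch of handling the native parallel function calling the linked guide describes. The tool-call payloads below are hard-coded stand-ins for what `message.tool_calls` would contain in a real response, and the function names are illustrative:

```python
import json

# Local functions the model may "call"; names are illustrative.
def get_weather(city: str) -> str:
    return f"sunny in {city}"

DISPATCH = {"get_weather": get_weather}

# Stand-in for response.choices[0].message.tool_calls: the API can now
# return several tool calls in one assistant message.
tool_calls = [
    {"id": "call_1", "function": {"name": "get_weather",
                                  "arguments": '{"city": "Berlin"}'}},
    {"id": "call_2", "function": {"name": "get_weather",
                                  "arguments": '{"city": "Paris"}'}},
]

# Execute every call and build one "tool" message per call; these would
# then be appended to the conversation and sent back to the model.
tool_messages = []
for call in tool_calls:
    fn = DISPATCH[call["function"]["name"]]
    args = json.loads(call["function"]["arguments"])
    tool_messages.append({
        "role": "tool",
        "tool_call_id": call["id"],
        "content": fn(**args),
    })
```

This replaces the emulation trick entirely: the model itself decides to emit several tool calls in one turn, and each one carries properly validated arguments.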