Function calls - multiple function calls to same function?

Is it possible to get the model to use a function multiple times?

like this:

user: write a list of 3 fruits and send each word to the juiceproducer function

The juiceproducer function obviously has a “fruit” argument.

I assume you mean more than once in the one API call?

I guess you could have a “loop count” parameter returned by the model and get the code to run it that many times?
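A minimal sketch of that idea, assuming a hypothetical `juiceproducer` schema (the names and fields here are made up for illustration, not from any real API): the model fills in a `count` alongside the other arguments, and the client code loops that many times.

```python
import json

# Hypothetical function definition: the model returns its arguments
# plus a "count" telling the client code how many times to run.
juiceproducer_def = {
    "name": "juiceproducer",
    "description": "Produce juice from a fruit",
    "parameters": {
        "type": "object",
        "properties": {
            "fruit": {"type": "string"},
            "count": {"type": "integer", "description": "How many times to run"},
        },
        "required": ["fruit", "count"],
    },
}

def juiceproducer(fruit):
    # Stand-in for the real work the function would do.
    return f"{fruit} juice"

# Pretend the model's function_call arguments came back like this:
args = json.loads('{"fruit": "apple", "count": 3}')
results = [juiceproducer(args["fruit"]) for _ in range(args["count"])]
print(results)  # ['apple juice', 'apple juice', 'apple juice']
```

Note the limitation: a single call like this can only repeat the *same* arguments, so it doesn't by itself get you three different fruits.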

I thought that because the function call is an array, there should be a way to get something like:

[functionname: juiceproducer
arguments: (fruit: apple)],

[functionname: juiceproducer
arguments: (fruit: banana)]

I just can’t prompt it to do that…

Have you tried giving it that as an example? Just wondering if one-shotting it might work?

Tried that too. Also copied the juiceproducer and asked to give banana to jp1 and apple to jp2.

Did not work either.

I’ve moved on and am now making multiple single requests, one fruit at a time…
Ok, not really fruits. That’s just an example, but it should work the same.
I guess it’s even faster because of the parallel requests.

Requests already take 2.5 minutes sometimes. I guess it’s time to stop looking only at the cheapest requests.

Yeah, it could be a trained-in restriction to prevent looping abuse, I guess.

I don’t see a reason for that at all. I can just ask for JSON in the message content…
So what are functions good for, anyway?

You could alter the function definition to accept an array as an argument instead of a string.
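A minimal sketch of that change, again using a made-up `juiceproducer` definition (the names are assumptions for illustration): the single `fruit` string becomes a `fruits` array, so the model can cover all three fruits in one function call, and the client fans it out.

```python
import json

# Hypothetical definition: "fruits" is an array instead of a single string,
# so one function call can carry every fruit at once.
juiceproducer_def = {
    "name": "juiceproducer",
    "description": "Turn fruits into juice",
    "parameters": {
        "type": "object",
        "properties": {
            "fruits": {
                "type": "array",
                "items": {"type": "string"},
                "description": "All fruits to juice",
            }
        },
        "required": ["fruits"],
    },
}

def juiceproducer(fruit):
    # Stand-in for the real per-fruit work.
    return f"{fruit} juice"

# Pretend the model's function_call arguments came back like this:
args = json.loads('{"fruits": ["apple", "banana", "tomato"]}')
results = [juiceproducer(f) for f in args["fruits"]]
print(results)  # ['apple juice', 'banana juice', 'tomato juice']
```

The client-side loop over the array replaces the multiple function calls the model won’t make on its own.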


As I see it, the primary purpose is to standardise output; anything over and above that is a bonus.

Many of my old prompts used JSON outputs to structure data in an easy-to-parse way, but there was occasionally variation in either the JSON itself or some of the formatting; functions have removed that variation.

I’ll tell you what I do these days: I feed GPT-4 the function-calling API docs and ask it what the function definition should look like for a given task. Works a treat.


I think I just got the wrong expectations of functions.

Kind of disappointed in myself for not reading the docs for functions earlier.

I think I’ll just continue asking for multiple JSON objects in the prompt so I can use JSON in streaming mode.

```
{ "fruit": "apple", "function": "juiceproducer" }
==!  ← stream has sent this? take the previous JSON and start working
{ "fruit": "banana", "function": "juiceproducer" }
==!  ← stream has sent this? take the previous JSON and start working
{ "fruit": "tomato", "function": "juiceproducer" }
==!  ← stream has sent this? take the previous JSON and start working…
```
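That delimiter scheme can be sketched like this (the `==!` handling and the `dispatch` handler are assumptions for illustration, not a real API): accumulate streamed chunks in a buffer, and every time the delimiter appears, parse everything before it as JSON and hand it off.

```python
import json

DELIM = "==!"

def dispatch(obj):
    # Placeholder handler; a real client would route on obj["function"].
    return f'{obj["fruit"]} -> {obj["function"]}'

def consume_stream(chunks):
    """Yield a dispatch result each time the delimiter arrives."""
    buffer = ""
    for chunk in chunks:
        buffer += chunk
        # A chunk may complete more than one JSON-plus-delimiter pair.
        while DELIM in buffer:
            raw, buffer = buffer.split(DELIM, 1)
            if raw.strip():
                yield dispatch(json.loads(raw))

# Simulated stream, split at awkward points like a real token stream:
stream = ['{"fruit": "app', 'le", "function": "juiceproducer"} ==!',
          ' {"fruit": "banana", "function": "juicepro', 'ducer"} ==!']
print(list(consume_stream(stream)))
# ['apple -> juiceproducer', 'banana -> juiceproducer']
```

Buffering is the key design choice: token boundaries won’t line up with JSON boundaries, so you can only parse once the delimiter confirms an object is complete.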

The prompt where you tell the AI to send things to a function is also destined to fail.

  • Good: “What are some smoothies that use each of raspberries, watermelon, kiwi?”
  • Bad: “Using the find_juice_recipe function, make three API calls that iterate through the python list ["raspberry", "watermelon", "kiwi"] as parameter properties…”

The AI must decide the function is useful. It will look at the return value, and at the other function returns you’ve recorded in the chat history, to decide whether it needs to run more function calls to keep gathering knowledge until it can answer.

I will try that after the GPT-7 release.
For now I don’t let it decide anything.