Is function calling more than multiple calls?

Hi everyone!

Is function calling different from:

  1. Prompt the LLM to produce structured output for the function's arguments, within a given task context.
  2. Then execute the function with the generated input.
  3. And finally call the model back with the result so it can answer the initial task (I sketch this just below).
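
Concretely, here is the manual pattern I mean, as a minimal sketch. I'm assuming the OpenAI Python SDK; `is_in_the_list` and the word list are made-up placeholders:

```python
import json
from openai import OpenAI

client = OpenAI()
WORD_LIST = {"apple", "banana", "cherry"}  # made-up placeholder list

def is_in_the_list(word: str) -> bool:
    return word.lower() in WORD_LIST

# 1. Prompt the LLM for structured output: the function's argument as JSON.
completion = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user",
               "content": 'Generate a random word. Answer only in JSON: {"word": "..."}'}],
    response_format={"type": "json_object"},
)
args = json.loads(completion.choices[0].message.content)

# 2. Execute the function with the generated input.
result = is_in_the_list(args["word"])

# 3. Call back with the result so the model can answer the initial task.
followup = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": "Generate a random word that is in my list."},
        {"role": "assistant", "content": json.dumps(args)},
        {"role": "user", "content": f"is_in_the_list returned {result}. Now finish the task."},
    ],
)
print(followup.choices[0].message.content)
```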

I'm asking especially about non-reasoning models. For reasoning models, I would assume they can run a function multiple times (is that really true?).
Say you give the task "generate a random word, check it with the function is_in_the_list: str -> bool, and stop once you have found one in the list". Is it really going to do it? lol (Of course I can code it myself, but since my question is more general I'll still ask.)

The basic differences from what you describe:

  • The model is trained to call functions in its own format, and to recognize the API “function” specification properties, which are placed into its context in a brief form.
  • Calling a function is optional: the model does it when that appears useful for answering or fulfilling the user’s message.

It is your code that must catch a response containing a function call, and then call the model again with the function’s return value added; your code can keep iterating if the model wants more function calls before producing a response to the user. This does not require a “reasoning model”: the AI has still, in effect, reasoned that it needed to call multiple functions to get the job done (or to retry a different way if a function call didn’t work out).
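
For illustration, a minimal sketch of that catch-and-iterate loop, assuming the OpenAI chat completions tool-calling API (the tool itself is just a placeholder):

```python
import json
from openai import OpenAI

client = OpenAI()

def run_tool(name: str, arguments: dict):
    # Dispatch to your own functions here (placeholder example).
    if name == "is_in_the_list":
        return arguments["word"].lower() in {"apple", "banana", "cherry"}
    raise ValueError(f"unknown tool {name}")

tools = [{
    "type": "function",
    "function": {
        "name": "is_in_the_list",
        "description": "Return true if the word is in the target list.",
        "parameters": {
            "type": "object",
            "properties": {"word": {"type": "string"}},
            "required": ["word"],
        },
    },
}]

messages = [{"role": "user",
             "content": "Find a random word that is in the list, then tell me."}]

while True:
    response = client.chat.completions.create(
        model="gpt-4o-mini", messages=messages, tools=tools)
    msg = response.choices[0].message
    if not msg.tool_calls:          # no more calls: this is the user-facing answer
        print(msg.content)
        break
    messages.append(msg)            # keep the assistant's tool-call turn
    for call in msg.tool_calls:     # run each requested function
        result = run_tool(call.function.name, json.loads(call.function.arguments))
        messages.append({"role": "tool",
                         "tool_call_id": call.id,
                         "content": json.dumps(result)})
```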

A better “random” pattern, since AI models aren’t good at actually producing random outputs for a fixed input, is to have a function do the random part in code. A function described as “send this function 100 words and it will pick a random one” would actually deliver the randomness you want when the user asks for something like that.
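
For example, such a tool could look like this (the name and schema here are just an illustration, not a fixed API):

```python
import random

def pick_random_word(words: list[str]) -> str:
    """Do the randomness in code, where it is actually random."""
    return random.choice(words)

pick_random_word_tool = {
    "type": "function",
    "function": {
        "name": "pick_random_word",
        "description": "Send this function up to 100 words and it will pick a random one.",
        "parameters": {
            "type": "object",
            "properties": {
                "words": {"type": "array", "items": {"type": "string"},
                          "maxItems": 100},
            },
            "required": ["words"],
        },
    },
}
```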

Thank you, it is very insightful to read that even a non-reasoning model can keep generating calls until it (somehow) understands it has the results it needs, for example by exploring other arguments in its function calls.
On specific tasks, function calling seems to make a non-reasoning model act like a reasoning model.
For example, if the function is is_molecule_in_the_official_doc and the task is to extract from a text only the molecule names that appear in the official documentation, then a non-reasoning model can try is_molecule_in_the_official_doc on different molecule names, and the true/false feedback will then be used by the LLM. It might explain why it was wrong, and the context will be enriched. That could be called reasoning.
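
Something like this hypothetical checker, plugged into the kind of loop you described (the document set here is a made-up placeholder):

```python
# Hypothetical checker; OFFICIAL_DOC_MOLECULES is a made-up placeholder.
OFFICIAL_DOC_MOLECULES = {"ibuprofen", "paracetamol", "aspirin"}

def is_molecule_in_the_official_doc(name: str) -> bool:
    """True/false feedback that lands back in the model's context."""
    return name.lower() in OFFICIAL_DOC_MOLECULES
```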

All AI models have a degree of reasoning, or intelligence that is artificial. The so-named “reasoning” models just have a hidden place for planning, and can have multiple prompts or internal iterations run on them before they finally generate the output you see, which can simply be “use function”.

The function can even be more general than your example, and the AI can decide how to use it.

function: knowledgebase keyword search
returns: knowledge passages surrounding the matched keyword query, ranked by semantics of passage.

Usage: “yes, I have 22 returns for plutonium in enrichment_1944_secret.pdf, and 3 in iran_inspections.pdf”
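
A toy sketch of such a tool (placeholder passages; a real version would search an index and do the semantic ranking with embeddings):

```python
# Toy knowledgebase keyword search; PASSAGES is placeholder data.
PASSAGES = [
    ("enrichment_1944_secret.pdf", "...notes on plutonium enrichment..."),
    ("iran_inspections.pdf", "...inspectors logged plutonium storage..."),
]

def knowledgebase_keyword_search(query: str) -> list[dict]:
    """Return passages surrounding the matched keyword, tagged by source file."""
    hits = [{"file": f, "passage": p}
            for f, p in PASSAGES if query.lower() in p.lower()]
    # A real implementation would rank hits by passage semantics here.
    return hits
```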