What about when the functions have different signatures?

On the function calling technique, we have this section:

  const functionResponse = functionToCall(
    functionArgs.location,
    functionArgs.unit
  );


What about when the functions have different signatures? :thinking::thinking:

For instance, functionArgs.unit may not exist in every function.

The solution I see is a sequence of ifs.

Is there a more elegant solution?:thinking::thinking:

First, I strongly recommend against interpreting the “function call” API from OpenAI as literal function calls. Instead, view them as “template fillers” – when it “calls a function,” it’s really “filling in a template” about some action the model infers it needs to perform.

Second, once you have shifted your thinking to the “template filling” approach, you can use whatever mechanism you want to map “I have a bunch of templates, and they take different arguments” into taking some action in your code. A map from template (function) name to some kind of object that knows how to invoke whatever your real code needs to do, given a map of template (function) arguments, is pretty typical.

You will then “bind” your-code-implementation to template-names-with-args using those invoker objects, and pick the right invoker object using the template (function) name.
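A minimal sketch of that dispatch idea in TypeScript (all function names, handlers, and defaults here are hypothetical, invented only to illustrate the pattern): each handler pulls out the arguments it cares about, so functions with different signatures need no chain of ifs.

```typescript
// The raw arguments parsed from the model's JSON output.
type FunctionArgs = Record<string, unknown>;

// A map from template (function) name to an invoker that knows how to
// extract the arguments its own implementation needs.
const handlers: Record<string, (args: FunctionArgs) => string> = {
  get_current_weather: (args) => {
    const location = String(args.location);
    // "unit" is optional for this template; supply a default when absent.
    const unit = typeof args.unit === "string" ? args.unit : "celsius";
    return `Weather in ${location} in ${unit}`;
  },
  get_current_time: (args) => {
    // This template has a completely different signature: one argument.
    const timezone = String(args.timezone);
    return `Time in ${timezone}`;
  },
};

// Pick the right invoker using the template (function) name.
function dispatch(functionName: string, functionArgs: FunctionArgs): string {
  const handler = handlers[functionName];
  if (!handler) {
    throw new Error(`Unknown function: ${functionName}`);
  }
  return handler(functionArgs);
}

console.log(dispatch("get_current_weather", { location: "Lisbon" }));
// → Weather in Lisbon in celsius
```

Adding a new function is then just one more entry in the map, with no changes to the dispatch logic itself.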

Thanks for the effort. I’m not sure I understood your mindset.
:grin:
Can you give me a couple of examples? They can be hypothetical.

Hi! I’ll try to dive in and give you a better picture of functions (and “tools” that use the functions method).

When we set up a chatbot, we transform an AI that originally just generated likely completion text into one that produces language responding to a user. The AI answers only with its own knowledge, shaped by other language we give as a “prompt” that comes before what the AI should output: prompt language that can be instructions on how to act, documents we want it to use when answering, and a separate query after that information.

That “programming” part of how to act can be made to appear permanent and separate from individual user inputs. For example:

  • Here is a chatbot that talks like a California surfer; or
  • Here is an AI that accepts HTML code and reformats it into rich text markup.

In each case, the AI is still giving a direct response to a user from their input.

But what if we give that AI preprogramming some options of what it can produce? We can have the response be normal, or a specially-encoded triggering message. Example instruction:

  • In most cases, AI will answer the user directly as a math tutor, but
  • If user needs answers where trigonometry calculations are necessary, a calculator is available to the AI and the AI special response can trigger the calculator. This calculator is useful for numeric answers to functions such as SIN(335)… (much more description)

That last part might take lots of words to get the AI to understand, and then special scanning of the text the AI produces to find out if it wants to use a calculator.


OpenAI saw that others were doing this the hard way, and developed functions. The model was pre-trained to understand that user questions could be better fulfilled with outside tools without the user asking, and OpenAI also provided a specification framework that the AI will accept, letting it know how to emit special language to call an external function.

So that is what you are giving the AI: a specification of your function. It receives and outputs its natural language still, but using special containers that are recognized by OpenAI’s server endpoint and interpreted for you.


The specification is sent as a JSON Schema (with spec modifications). The specification format chosen also allows you to tell the AI whether one of the fields that it can output is optional or required.

Let’s specify a function. The AI will receive this text (but in a special form it understands):

function_list=[
    {
        "name": "generate_image",
        "description": "AI image creator makes pictures from description",
        "parameters": {
            "type": "object",
            "properties": {
                "image_description": {
                    "type": "string"
                },
                "image_resolution": {
                    "type": "string"
                }
            },
            "required": ["image_description"]
        }
    }
]

You can see that I put in the specification “required”, but only included one of the two properties as required, “image_description”. The other, “image_resolution”, is seen as not required.

The AI will then know that it can specify a resolution for an image if needed, or it can leave it blank and some default value will be used by your function.

Then in your API code, you’ll receive a “function_call” response instead of “content” for the user, containing JSON. Your own code should parse that AI language, and expect that the optional parameters may or may not be there.
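A short sketch of that parsing step in TypeScript, assuming the hypothetical generate_image specification above (the default resolution and function name are illustrative, not part of any real API):

```typescript
// Shape of the arguments the model may emit for the generate_image spec.
interface ImageArgs {
  image_description: string;
  image_resolution?: string; // optional in the schema, so it may be absent
}

// The function_call arguments arrive as a JSON string to be parsed.
function handleGenerateImage(rawArguments: string): string {
  const args = JSON.parse(rawArguments) as ImageArgs;
  // Fall back to a default when the AI omitted the optional field.
  const resolution = args.image_resolution ?? "1024x1024";
  return `Generating "${args.image_description}" at ${resolution}`;
}

console.log(handleGenerateImage('{"image_description": "a red fox"}'));
// → Generating "a red fox" at 1024x1024
```

The nullish-coalescing default means your code never needs to branch on whether the optional parameter was present.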


The AI is the one that has decided if it will use an optional parameter, and even made the ultimate decision if it will invoke the function.


Thanks for the well-explained answer! I have learnt a lot from your response.:two_hearts::ok_hand::heart:
Nonetheless, I still cannot see another way to solve my problem.
I have decided to use ifs; it seems to work, but it is not elegant.
I am using Angular (TypeScript), which is very strict about types and function parameters.

If you have an API where there are very rigid inputs that won’t accept alternate formats, but there are two different ways that you may wish to receive data, you can also just make two completely different functions for the AI.

Besides the different number of parameters, you can enhance the AI’s decision-making with a thorough description of each function and of each parameter.
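For example, instead of one weather function with shifting arguments, you might register two separate specifications with rich descriptions so the model can pick the right one. A hypothetical sketch (all names and descriptions are invented for illustration):

```typescript
// Two separate function specs, each with its own rigid signature,
// rather than one function with optional, shape-shifting arguments.
const function_list = [
  {
    name: "get_weather_by_city",
    description: "Current weather for a named city, e.g. 'Lisbon'",
    parameters: {
      type: "object",
      properties: {
        city: { type: "string", description: "City name" },
      },
      required: ["city"],
    },
  },
  {
    name: "get_weather_by_coordinates",
    description: "Current weather at latitude/longitude coordinates",
    parameters: {
      type: "object",
      properties: {
        latitude: { type: "number", description: "Degrees north" },
        longitude: { type: "number", description: "Degrees east" },
      },
      required: ["latitude", "longitude"],
    },
  },
];

console.log(function_list.map((f) => f.name));
// → [ 'get_weather_by_city', 'get_weather_by_coordinates' ]
```

Each spec now maps to exactly one strongly typed handler in your TypeScript code, which sidesteps the optional-parameter problem entirely.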