I am confused about functions (via tools) in the Responses API

Hello,

I am currently trying to understand how to use the Responses API, and I am a little confused about the function tool.

I can understand the concept of a function in an AI assistant. Each function has a JSON Schema definition for its input parameters, and when the model decides to use a function, it can only call it with the specified parameters. I think the AI can be more accurate when it is constrained to parameters like this.
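
For example, here is my own rough sketch in Python (the names are placeholders I made up, not from any docs) of how I understand such a definition looks when it is passed to the Responses API:

weather_tool = {
    "type": "function",
    "name": "get_weather",                 # placeholder function name
    "description": "Get the current weather for a city.",
    "strict": True,
    "parameters": {                        # JSON Schema constraining the arguments
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. Paris"}
        },
        "required": ["city"],
        "additionalProperties": False
    }
}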

However, I don't understand why it is important or what type of application it is useful in. Does anyone have any detailed resources or examples?

Thank you in advance.

It allows the model to interact with external things.

Like sending an email, getting the current weather in the real world, or using your own search engine implementation to retrieve better context for its reasoning.

It allows the AI to decide: is a function like "search_company_knowledge" going to be useful for satisfying a user's question?

The actual function is code that you write. The AI only produces its request for the function as a response.

You receive the function and its parameters, like the AI writing search_company_knowledge({"query":"price of dinglebops"}). Then your code must satisfy that request.
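
Here is a rough sketch of that in Python with the Responses API (the tool, the model name, and the return value are all placeholder examples, and it assumes the model decides to make the call):

import json
from openai import OpenAI

client = OpenAI()

# The declaration the model sees; every name here is a placeholder example
search_tool = {
    "type": "function",
    "name": "search_company_knowledge",
    "description": "Search the internal company knowledge base.",
    "parameters": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
        "additionalProperties": False
    }
}

def search_company_knowledge(query: str) -> str:
    # The real work happens in YOUR code, not inside the model
    return "Dinglebops: $12.99 each"

response = client.responses.create(
    model="gpt-4.1",                             # placeholder model choice
    input="What is the price of dinglebops?",
    tools=[search_tool],
)

# The model only emits its request; your code detects it and runs the real function
call = next(item for item in response.output if item.type == "function_call")
arguments = json.loads(call.arguments)           # e.g. {"query": "price of dinglebops"}
result = search_company_knowledge(**arguments)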

A user need:

user: post a tweet “AI functions are neat”
assistant: tweet_tool({"user_id": "_j", "tweet_contents": "AI functions are neat!"})

Then:

tool return: "success: tweet posted"
assistant: You should see your tweet that I sent for you now!
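
Continuing the Python sketch above, the return step works the same way for the tweet example: your code does the real action, then reports the result back as a function_call_output item, and the model writes the final reply to the user.

follow_up = client.responses.create(
    model="gpt-4.1",
    previous_response_id=response.id,     # ties the result back to the model's request
    tools=[search_tool],
    input=[{
        "type": "function_call_output",
        "call_id": call.call_id,          # must match the call the model made
        "output": result,                 # plain text such as "success: tweet posted" is fine
    }],
)
print(follow_up.output_text)              # the model's final, user-facing answer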

:brain: You're not just posting an AI response and passing it off as human writing; you're showing how badly watermarked the AI output is with certain patterns.

Thank you for your answer. It is much clearer now :slight_smile:

So the function tool is basically a decision layer for the actual functions in my codebase, right? Since a user's message is plain text, it is not possible to understand the question and extract the necessary data from it with traditional code logic. The function tool just tells us which function is needed and provides the input parameters along with the response object.

We live in times where it is hard to find genuinely helpful human responses. :slight_smile:


The function doesn’t do the work of deciding when it is necessary and useful: That is what the AI does.

  • An AI model will typically respond to YOU, the user.

  • However, the AI can instead direct its response to a function tool recipient when it starts producing an output.

You can experiment with functions in the Prompts Playground for Responses. (Note that Chat Completions also has function calling; it just doesn't have the internal tools developed by OpenAI, such as "web search".)

Here I have a simple AI setup where I have created several tools that are always being offered to the AI (the specification is provided to it as an internal message).

Let’s examine the structure of one of these:

{
  "name": "fortune",
  "description": "Returns a horoscope-like sentence randomly.",
  "strict": true,
  "parameters": {
    "type": "object",
    "required": [],
    "properties": {},
    "additionalProperties": false
  }
}

It is the simplest of functions. The AI doesn't have to write any parameters; calling the function merely acts as a trigger.

Now let's talk to the AI and see if it can intuit a good function to employ.

On my first try, it didn't call a function; I just got what the AI already knew it could answer about.

Let's be more explicit about my desires.

This time a function was called. In the playground you can see the empty JSON arguments, and an API call would return the function name along with its arguments for your code to satisfy.

In the playground, "tab" just fills in a simulated function result. My real code might pull from a long list of predictions in a fortune file. The example functions are meant to be easy to program.
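
For example (just a sketch, with made-up predictions), the real code behind the fortune trigger could be as small as:

import random

# Stand-in for reading a real fortune file
FORTUNES = [
    "A surprise refactor will brighten your afternoon.",
    "Beware of off-by-one errors in matters of the heart.",
    "An unexpected merge conflict brings a new friendship.",
]

def fortune() -> str:
    # No arguments: the model's call is only a trigger
    return random.choice(FORTUNES)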

How it is actually serviced: the developer's code must call the API again, sending back the assistant's request (and its ID) along with the return value, which can be natural language (it doesn't have to be JSON). Then there might be more function calls, or a final response to the user, in which the AI can employ the new knowledge it obtained with its language-generation ability.
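
A hedged sketch of that servicing loop in Python (reusing the fortune() function and the tool definition from above; the model name and the rest of the wiring are my own assumptions, not the only way to do it):

import json
from openai import OpenAI

client = OpenAI()

fortune_tool = {
    "type": "function",
    "name": "fortune",
    "description": "Returns a horoscope-like sentence randomly.",
    "strict": True,
    "parameters": {"type": "object", "required": [], "properties": {}, "additionalProperties": False},
}
TOOL_FUNCTIONS = {"fortune": fortune}      # map tool names to real code (fortune() from the sketch above)

response = client.responses.create(
    model="gpt-4.1",                       # placeholder model choice
    input="Tell me my fortune.",
    tools=[fortune_tool],
)

# Keep servicing function calls until the model produces a normal reply
while any(item.type == "function_call" for item in response.output):
    tool_outputs = []
    for item in response.output:
        if item.type == "function_call":
            args = json.loads(item.arguments)            # "{}" for the fortune tool
            result = TOOL_FUNCTIONS[item.name](**args)   # run YOUR code
            tool_outputs.append({
                "type": "function_call_output",
                "call_id": item.call_id,
                "output": result,                        # natural language is fine; it doesn't have to be JSON
            })
    response = client.responses.create(
        model="gpt-4.1",
        previous_response_id=response.id,                # refers the model back to its own request by ID
        tools=[fortune_tool],
        input=tool_outputs,
    )

print(response.output_text)                              # the final response to the user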

The final response is also informed by the AI's knowledge that it has fun facts available as another function.

Here’s that playground preset, where you can talk to the AI, maybe as a D&D dungeon master who needs to find out if an attack was successful, or a programmer who needs a random seed value, and get an understanding of what your code can do to help the AI:

https://platform.openai.com/playground/p/8inCYbuAVpAXIu8trnEktGY0?mode=chat


That is a fantastic answer; it is crystal clear now. If I have more questions, I will let you know. Thank you, cheers.
