Functions vs. Tools: What is the difference?

I am wondering whether to use functions or tools in the Chat Completions API with the newest gpt-3.5-turbo-1106 to do function calls, since they both seem to have the same functionality. Then I checked the OpenAI cookbook, and it uses the tools parameter to pass in the functions instead of the functions parameter. What is the difference, and do I need to format the functions differently?


+1

I’m seeing discrepancies abound:

- Their Python SDK cookbook was changed 9 hours ago from "role": "tool" to "role": "function".
- However, their Node readme.md code (which we're using) leverages "role": "tool".
- Meanwhile, their docs also use "role": "tool", so I guess the cookbook update is throwing me off.
- Finally, their completions.ts ChatCompletionToolMessageParam lacks a "name" attribute, which is present in the examples in their Node repo.

Wooo innovation :joy:


I just found the answer here: https://platform.openai.com/docs/api-reference/chat/create

functions have been deprecated, and tools are the way forward.

tools:

> A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for.

The response object also has a new finish_reason, "tool_calls", and that section likewise mentions the deprecated function_call.
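In practice, the migration is mostly a wrapper: each function schema moves inside a tools entry of "type": "function", and function_call becomes tool_choice. A minimal sketch, where the get_weather schema and the messages are placeholders of my own, not taken from the docs:

from openai import OpenAI

client = OpenAI()

weather_schema = {  # placeholder function schema
    "name": "get_weather",
    "description": "Get the weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

# Deprecated: functions=[weather_schema], function_call="auto"
# Current: the same schema, wrapped in a tools entry
response = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=[{"type": "function", "function": weather_schema}],
    tool_choice="auto",
)
print(response.choices[0].finish_reason)  # "tool_calls" when the model calls the tool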

It seems that most cookbooks have been updated, but the guides haven’t yet.


Thank you so much! It's kind of a random update, as they don't seem to behave any differently.

You’re welcome! My guess is that they’re preparing the way for OpenAI-hosted “tools”, such as GPTs or Assistants, that your agent can then call, basically creating a multi-agent system.


I agree with @dane.jordan.
I ran into the same issue with the name field; I had to remove it to get TypeScript to compile.
I think the confusion on OpenAI’s side is that they define ChatCompletionMessageParam as a discriminated union type, but they consume it as if it were an intersection type.

The discriminated union, as OpenAI defines it:

export type ChatCompletionMessageParam =
  | ChatCompletionSystemMessageParam
  | ChatCompletionUserMessageParam
  | ChatCompletionAssistantMessageParam
  | ChatCompletionToolMessageParam
  | ChatCompletionFunctionMessageParam;

They are using it as if it were defined like this:

export type ChatCompletionMessageParam =
  ChatCompletionSystemMessageParam
  & ChatCompletionUserMessageParam
  & ChatCompletionAssistantMessageParam
  & ChatCompletionToolMessageParam
  & ChatCompletionFunctionMessageParam;

Hi, has any of you good folks figured out a way to stream the response while using function calls? I’m able to stream without function calls, but with function calls I can only display the entire result at once (without streaming). I read in another post that one can get streaming to work by dumping the function response as an assistant message into messages. In that case, are you running it through a loop? If anyone has got this to work, please do share your script. Thanks in advance!

Certainly. Did you expect that nobody knows how to make it work?
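For context, the example below assumes params is a dict built roughly like this. This part is my reconstruction, with the get_random_float schema inferred from the chunk dumps shown further down; the stream=True entry is what makes chunks arrive at all:

from openai import OpenAI

client = OpenAI()

# Hypothetical tool schema, inferred from the get_random_float chunks below
tools_spec = [{
    "type": "function",
    "function": {
        "name": "get_random_float",
        "description": "Return a random float within a range",
        "parameters": {
            "type": "object",
            "properties": {
                "range_start": {"type": "number"},
                "range_end": {"type": "number"},
            },
            "required": ["range_start", "range_end"],
        },
    },
}]

params = {
    "model": "gpt-3.5-turbo-1106",
    "messages": [{"role": "user", "content": "Give me a random float."}],
    "tools": tools_spec,
    "stream": True,  # required, or there are no chunks to iterate
}

(The schema list is named tools_spec here to avoid colliding with the tools accumulator list used in the loop below.)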

Here is an example request that takes each stream chunk as it is received, displays any content, and adds any tool_call parts to a list:

c = client.chat.completions.with_raw_response.create(**params)
reply=""
tools=[]
for chunk in c.parse():
    # This gets content, which is clear text meant for a user
    if chunk.choices[0].delta.content:
        reply += chunk.choices[0].delta.content        # gather for chat history
        print(chunk.choices[0].delta.content, end="")  # your output method
    # This gets whole tool chunk objects, that need later assembly
    if chunk.choices[0].delta.tool_calls:
        tools += chunk.choices[0].delta.tool_calls     # gather ChoiceDeltaToolCall list chunks

We have now gathered the delta parts into the tools list, taken from every chunk whose JSON contained that element.

At this point, we’ve got a Python list of objects (the library’s ChoiceDeltaToolCall). They look like this if we dump them to dictionaries:

for tool in tools:
    print(tool.model_dump())

The output, collected from the streamed chunks:

{'index': 0, 'id': 'call_P0WmAn1FTpLTlJtz18a53z', 'function': {'arguments': '', 'name': 'get_random_float'}, 'type': 'function'}
{'index': 0, 'id': None, 'function': {'arguments': '{"', 'name': None}, 'type': None}
{'index': 0, 'id': None, 'function': {'arguments': 'range', 'name': None}, 'type': None}
{'index': 0, 'id': None, 'function': {'arguments': '_start', 'name': None}, 'type': None}
{'index': 0, 'id': None, 'function': {'arguments': '":', 'name': None}, 'type': None}
...

You can see, then, that we need to do a bit of parsing to put these back into the same type of standalone tool call object we’d get from a non-streaming response. Let’s use a function:

tools_obj = tool_list_to_tool_obj(tools)

I have written that function out before… and you can avoid the search by clicking through.
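For completeness, here is a minimal sketch of what such an assembly function could look like; this is my own reconstruction, not necessarily the version linked above. It groups chunks by index (parallel tool calls arrive with distinct indexes), keeps the first id, name, and type it sees, and concatenates the streamed arguments fragments:

def tool_list_to_tool_obj(tools):
    """Reassemble streamed ChoiceDeltaToolCall chunks into complete tool calls."""
    calls = {}  # one accumulator per tool call index
    for delta in tools:
        entry = calls.setdefault(delta.index, {
            "id": None, "type": None,
            "function": {"name": None, "arguments": ""},
        })
        if delta.id:
            entry["id"] = delta.id
        if delta.type:
            entry["type"] = delta.type
        if delta.function:
            if delta.function.name:
                entry["function"]["name"] = delta.function.name
            if delta.function.arguments:
                entry["function"]["arguments"] += delta.function.arguments
    # Return in index order, matching the shape of a non-streamed tool_calls list
    return [calls[i] for i in sorted(calls)]

With the chunks dumped above, this yields a single dict whose arguments field is the fully concatenated JSON string, ready for json.loads().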
